Test Report: Docker_Linux_crio 22122

022dd2780ab8206ac68153a1ee37fdbcc6da7ccd:2025-12-13:42761

Failed tests (29/415)

Order   Failed test   Duration (s)
38 TestAddons/serial/Volcano 0.35
44 TestAddons/parallel/Registry 13.29
45 TestAddons/parallel/RegistryCreds 0.42
46 TestAddons/parallel/Ingress 149.91
47 TestAddons/parallel/InspektorGadget 5.28
48 TestAddons/parallel/MetricsServer 5.31
50 TestAddons/parallel/CSI 54.88
51 TestAddons/parallel/Headlamp 2.57
52 TestAddons/parallel/CloudSpanner 5.25
53 TestAddons/parallel/LocalPath 10.11
54 TestAddons/parallel/NvidiaDevicePlugin 5.25
55 TestAddons/parallel/Yakd 5.26
56 TestAddons/parallel/AmdGpuDevicePlugin 5.25
125 TestFunctional/parallel/ImageCommands/ImageListShort 2.28
126 TestFunctional/parallel/ImageCommands/ImageListTable 2.28
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 5.26
294 TestJSONOutput/pause/Command 1.53
300 TestJSONOutput/unpause/Command 1.74
366 TestPause/serial/Pause 6.59
449 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.51
454 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.37
459 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.31
466 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.2
471 TestStartStop/group/old-k8s-version/serial/Pause 5.36
480 TestStartStop/group/no-preload/serial/Pause 5.73
483 TestStartStop/group/embed-certs/serial/Pause 6.29
485 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.14
494 TestStartStop/group/newest-cni/serial/Pause 5.98
496 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.25
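
For local triage, any row above can be re-run in isolation. A minimal sketch, assuming the standard minikube integration-test layout under test/integration and a pre-built out/minikube-linux-amd64 binary; the go test invocation and the --minikube-start-args values below are assumptions mirroring this job's driver/runtime, not taken from the report itself:

    # Re-run one failed test against the docker driver with the crio runtime (names from the table above).
    go test ./test/integration -v -timeout 60m \
      -run "TestAddons/parallel/Registry" \
      --minikube-start-args="--driver=docker --container-runtime=crio"
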
TestAddons/serial/Volcano (0.35s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-802674 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-802674 addons disable volcano --alsologtostderr -v=1: exit status 11 (350.293213ms)

-- stdout --

-- /stdout --
** stderr ** 
	I1213 13:06:48.846133  403694 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:06:48.846467  403694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:06:48.846481  403694 out.go:374] Setting ErrFile to fd 2...
	I1213 13:06:48.846488  403694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:06:48.846975  403694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:06:48.847752  403694 mustload.go:66] Loading cluster: addons-802674
	I1213 13:06:48.848146  403694 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:06:48.848170  403694 addons.go:622] checking whether the cluster is paused
	I1213 13:06:48.848252  403694 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:06:48.848265  403694 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:06:48.848630  403694 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:06:48.867487  403694 ssh_runner.go:195] Run: systemctl --version
	I1213 13:06:48.867572  403694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:06:48.885582  403694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:06:48.980219  403694 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:06:48.980324  403694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:06:49.011656  403694 cri.go:89] found id: "efa46cf269b564b4844602a1d159fe37ab66ca5f6b418f189ee827b9dba093c8"
	I1213 13:06:49.011681  403694 cri.go:89] found id: "ae277236625835803563b5d3709c95c1715a58bb7565d8ec6086941d0839195e"
	I1213 13:06:49.011687  403694 cri.go:89] found id: "25c2ccc8d56eb50465c9572ac4b69e4c56f4fc5934e450f09c18265ee0577194"
	I1213 13:06:49.011692  403694 cri.go:89] found id: "00b38c263e00072dd7d50a33875d10fc536592b2a1d6e234346711aac7cbbec0"
	I1213 13:06:49.011696  403694 cri.go:89] found id: "6df323a2878def1aae2a14ca9c2ad038546721c6ae36b6f316f313176188b46c"
	I1213 13:06:49.011714  403694 cri.go:89] found id: "c5db025aa30e9cd2c67c81ec6bc3c8ea9785b55f88c26b2645dcdbd948a7de0d"
	I1213 13:06:49.011720  403694 cri.go:89] found id: "263b6770119de12b6f6ae321a34d15fe0c91d69ef191dfcd91463e142f87e2d3"
	I1213 13:06:49.011725  403694 cri.go:89] found id: "f08ae0fc41016ef54669e28ca43b05abcc99b07e53b91caaf3b697ef447ee88d"
	I1213 13:06:49.011731  403694 cri.go:89] found id: "6d85d43816c0e9c27cdb9a0406519758a7c04501507c2a84cdacf18ed0bfe19f"
	I1213 13:06:49.011744  403694 cri.go:89] found id: "40aee451d49aa718e9b9b630dcc767fa2e58079b1b3f9728f0c44aa6c3b5c7e5"
	I1213 13:06:49.011752  403694 cri.go:89] found id: "7f147ccf5e501405b11f6c314e4bfd0d7c26b4a6bf64001ba70bbe56a38b0504"
	I1213 13:06:49.011756  403694 cri.go:89] found id: "a9051d728dbfaaa93fa17ffd17029974b838f52c264e001035d0dcb21ffd793a"
	I1213 13:06:49.011759  403694 cri.go:89] found id: "fc7d97af030f51f4603abd265b93269845365378e8e8c119222bafedc7cc4351"
	I1213 13:06:49.011761  403694 cri.go:89] found id: "f4ac5ed0bb71af6a3a22c2384168e3c4e9e23c1de940ae834d03068e9fea08ee"
	I1213 13:06:49.011764  403694 cri.go:89] found id: "bb2165f7660fc2ba491c4871263b79975a85cb6def2d2a4f73eca8a2dd7d8f07"
	I1213 13:06:49.011791  403694 cri.go:89] found id: "be21f9e65e565f792b744333cf27c95ecfa408d73cd2a551f0c5c7f265a293e3"
	I1213 13:06:49.011800  403694 cri.go:89] found id: "810cfaaa4b7814b312b2db787f8ec029d4e832cc7e12034fb85045552bd3f724"
	I1213 13:06:49.011806  403694 cri.go:89] found id: "5eca19a8b70c2a0e9d976b959fbf7d7aa4c7ee8009fb16d38e7b5f5c02b8cce6"
	I1213 13:06:49.011809  403694 cri.go:89] found id: "d50cb67d5dec7ec3f682549ab14b880502935a667c57f8d8cdb0c463515a22e6"
	I1213 13:06:49.011812  403694 cri.go:89] found id: "b6315f71701be89e474fba173cf05ee0075e34674512768e6df77a3cc4cd9523"
	I1213 13:06:49.011815  403694 cri.go:89] found id: "610b806094f3861cda2f55f3c5ae8348739fd03173056cb05f1e55d0f129881d"
	I1213 13:06:49.011817  403694 cri.go:89] found id: "2a7f427a075b6ebada9bc037f76c3a7326d7c26ef26054dd05f59dd7a696441e"
	I1213 13:06:49.011820  403694 cri.go:89] found id: "9b7e546540c7cea0b7f684aeaa74db9dca87eb76f77d77d8121f3927b1239ae2"
	I1213 13:06:49.011823  403694 cri.go:89] found id: "dba035f34dd51a8cd71b4f0ae554035ac03076228fce0be93b5b35ef0ca0e069"
	I1213 13:06:49.011825  403694 cri.go:89] found id: ""
	I1213 13:06:49.011867  403694 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:06:49.059008  403694 out.go:203] 
	W1213 13:06:49.117792  403694 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:06:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:06:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 13:06:49.117819  403694 out.go:285] * 
	* 
	W1213 13:06:49.122111  403694 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 13:06:49.127974  403694 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-802674 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.35s)

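Note: the disable failure above is not volcano-specific. The command aborts in its pre-disable "is the cluster paused" check, which lists kube-system containers with crictl and then runs "sudo runc list -f json" on the node; the runc call exits 1 with "open /run/runc: no such file or directory", which surfaces as MK_ADDON_DISABLE_PAUSED. The same signature appears in the Registry and RegistryCreds failures below. A minimal reproduction sketch against the same profile, with both commands adapted from the log above (whether they still reproduce depends on the addons-802674 node still being up):

    # The crictl listing used by the paused check succeeds and returns container IDs ...
    out/minikube-linux-amd64 -p addons-802674 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
    # ... but the follow-up runc query fails on this crio node because /run/runc does not exist.
    out/minikube-linux-amd64 -p addons-802674 ssh "sudo runc list -f json"
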
TestAddons/parallel/Registry (13.29s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.228011ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-8nh6x" [026d87d5-39ae-4470-87b4-17ae3e729d61] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003314963s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-q4bmk" [558bd886-2608-4e2a-b513-906ab0a12e90] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003394187s
addons_test.go:394: (dbg) Run:  kubectl --context addons-802674 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-802674 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-802674 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.832090502s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-802674 ip
2025/12/13 13:07:12 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-802674 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-802674 addons disable registry --alsologtostderr -v=1: exit status 11 (247.909ms)

-- stdout --

-- /stdout --
** stderr ** 
	I1213 13:07:12.165820  406260 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:07:12.165956  406260 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:12.165968  406260 out.go:374] Setting ErrFile to fd 2...
	I1213 13:07:12.165975  406260 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:12.166182  406260 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:07:12.166478  406260 mustload.go:66] Loading cluster: addons-802674
	I1213 13:07:12.166856  406260 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:12.166880  406260 addons.go:622] checking whether the cluster is paused
	I1213 13:07:12.166963  406260 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:12.166976  406260 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:07:12.167348  406260 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:07:12.186306  406260 ssh_runner.go:195] Run: systemctl --version
	I1213 13:07:12.186692  406260 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:07:12.205986  406260 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:07:12.302032  406260 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:07:12.302119  406260 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:07:12.330452  406260 cri.go:89] found id: "efa46cf269b564b4844602a1d159fe37ab66ca5f6b418f189ee827b9dba093c8"
	I1213 13:07:12.330475  406260 cri.go:89] found id: "ae277236625835803563b5d3709c95c1715a58bb7565d8ec6086941d0839195e"
	I1213 13:07:12.330481  406260 cri.go:89] found id: "25c2ccc8d56eb50465c9572ac4b69e4c56f4fc5934e450f09c18265ee0577194"
	I1213 13:07:12.330486  406260 cri.go:89] found id: "00b38c263e00072dd7d50a33875d10fc536592b2a1d6e234346711aac7cbbec0"
	I1213 13:07:12.330490  406260 cri.go:89] found id: "6df323a2878def1aae2a14ca9c2ad038546721c6ae36b6f316f313176188b46c"
	I1213 13:07:12.330495  406260 cri.go:89] found id: "c5db025aa30e9cd2c67c81ec6bc3c8ea9785b55f88c26b2645dcdbd948a7de0d"
	I1213 13:07:12.330499  406260 cri.go:89] found id: "263b6770119de12b6f6ae321a34d15fe0c91d69ef191dfcd91463e142f87e2d3"
	I1213 13:07:12.330504  406260 cri.go:89] found id: "f08ae0fc41016ef54669e28ca43b05abcc99b07e53b91caaf3b697ef447ee88d"
	I1213 13:07:12.330508  406260 cri.go:89] found id: "6d85d43816c0e9c27cdb9a0406519758a7c04501507c2a84cdacf18ed0bfe19f"
	I1213 13:07:12.330516  406260 cri.go:89] found id: "40aee451d49aa718e9b9b630dcc767fa2e58079b1b3f9728f0c44aa6c3b5c7e5"
	I1213 13:07:12.330522  406260 cri.go:89] found id: "7f147ccf5e501405b11f6c314e4bfd0d7c26b4a6bf64001ba70bbe56a38b0504"
	I1213 13:07:12.330527  406260 cri.go:89] found id: "a9051d728dbfaaa93fa17ffd17029974b838f52c264e001035d0dcb21ffd793a"
	I1213 13:07:12.330532  406260 cri.go:89] found id: "fc7d97af030f51f4603abd265b93269845365378e8e8c119222bafedc7cc4351"
	I1213 13:07:12.330538  406260 cri.go:89] found id: "f4ac5ed0bb71af6a3a22c2384168e3c4e9e23c1de940ae834d03068e9fea08ee"
	I1213 13:07:12.330543  406260 cri.go:89] found id: "bb2165f7660fc2ba491c4871263b79975a85cb6def2d2a4f73eca8a2dd7d8f07"
	I1213 13:07:12.330556  406260 cri.go:89] found id: "be21f9e65e565f792b744333cf27c95ecfa408d73cd2a551f0c5c7f265a293e3"
	I1213 13:07:12.330564  406260 cri.go:89] found id: "810cfaaa4b7814b312b2db787f8ec029d4e832cc7e12034fb85045552bd3f724"
	I1213 13:07:12.330570  406260 cri.go:89] found id: "5eca19a8b70c2a0e9d976b959fbf7d7aa4c7ee8009fb16d38e7b5f5c02b8cce6"
	I1213 13:07:12.330573  406260 cri.go:89] found id: "d50cb67d5dec7ec3f682549ab14b880502935a667c57f8d8cdb0c463515a22e6"
	I1213 13:07:12.330577  406260 cri.go:89] found id: "b6315f71701be89e474fba173cf05ee0075e34674512768e6df77a3cc4cd9523"
	I1213 13:07:12.330585  406260 cri.go:89] found id: "610b806094f3861cda2f55f3c5ae8348739fd03173056cb05f1e55d0f129881d"
	I1213 13:07:12.330589  406260 cri.go:89] found id: "2a7f427a075b6ebada9bc037f76c3a7326d7c26ef26054dd05f59dd7a696441e"
	I1213 13:07:12.330593  406260 cri.go:89] found id: "9b7e546540c7cea0b7f684aeaa74db9dca87eb76f77d77d8121f3927b1239ae2"
	I1213 13:07:12.330596  406260 cri.go:89] found id: "dba035f34dd51a8cd71b4f0ae554035ac03076228fce0be93b5b35ef0ca0e069"
	I1213 13:07:12.330600  406260 cri.go:89] found id: ""
	I1213 13:07:12.330651  406260 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:07:12.344198  406260 out.go:203] 
	W1213 13:07:12.345639  406260 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 13:07:12.345662  406260 out.go:285] * 
	* 
	W1213 13:07:12.350102  406260 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 13:07:12.351379  406260 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-802674 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.29s)

TestAddons/parallel/RegistryCreds (0.42s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.572215ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-802674
addons_test.go:334: (dbg) Run:  kubectl --context addons-802674 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-802674 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-802674 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (243.287858ms)

-- stdout --

-- /stdout --
** stderr ** 
	I1213 13:07:07.157905  405668 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:07:07.158001  405668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:07.158008  405668 out.go:374] Setting ErrFile to fd 2...
	I1213 13:07:07.158012  405668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:07.158192  405668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:07:07.158437  405668 mustload.go:66] Loading cluster: addons-802674
	I1213 13:07:07.158767  405668 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:07.158804  405668 addons.go:622] checking whether the cluster is paused
	I1213 13:07:07.158896  405668 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:07.158910  405668 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:07:07.159277  405668 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:07:07.176172  405668 ssh_runner.go:195] Run: systemctl --version
	I1213 13:07:07.176225  405668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:07:07.192745  405668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:07:07.292123  405668 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:07:07.292213  405668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:07:07.320854  405668 cri.go:89] found id: "efa46cf269b564b4844602a1d159fe37ab66ca5f6b418f189ee827b9dba093c8"
	I1213 13:07:07.320876  405668 cri.go:89] found id: "ae277236625835803563b5d3709c95c1715a58bb7565d8ec6086941d0839195e"
	I1213 13:07:07.320881  405668 cri.go:89] found id: "25c2ccc8d56eb50465c9572ac4b69e4c56f4fc5934e450f09c18265ee0577194"
	I1213 13:07:07.320884  405668 cri.go:89] found id: "00b38c263e00072dd7d50a33875d10fc536592b2a1d6e234346711aac7cbbec0"
	I1213 13:07:07.320887  405668 cri.go:89] found id: "6df323a2878def1aae2a14ca9c2ad038546721c6ae36b6f316f313176188b46c"
	I1213 13:07:07.320890  405668 cri.go:89] found id: "c5db025aa30e9cd2c67c81ec6bc3c8ea9785b55f88c26b2645dcdbd948a7de0d"
	I1213 13:07:07.320893  405668 cri.go:89] found id: "263b6770119de12b6f6ae321a34d15fe0c91d69ef191dfcd91463e142f87e2d3"
	I1213 13:07:07.320896  405668 cri.go:89] found id: "f08ae0fc41016ef54669e28ca43b05abcc99b07e53b91caaf3b697ef447ee88d"
	I1213 13:07:07.320899  405668 cri.go:89] found id: "6d85d43816c0e9c27cdb9a0406519758a7c04501507c2a84cdacf18ed0bfe19f"
	I1213 13:07:07.320911  405668 cri.go:89] found id: "40aee451d49aa718e9b9b630dcc767fa2e58079b1b3f9728f0c44aa6c3b5c7e5"
	I1213 13:07:07.320914  405668 cri.go:89] found id: "7f147ccf5e501405b11f6c314e4bfd0d7c26b4a6bf64001ba70bbe56a38b0504"
	I1213 13:07:07.320917  405668 cri.go:89] found id: "a9051d728dbfaaa93fa17ffd17029974b838f52c264e001035d0dcb21ffd793a"
	I1213 13:07:07.320920  405668 cri.go:89] found id: "fc7d97af030f51f4603abd265b93269845365378e8e8c119222bafedc7cc4351"
	I1213 13:07:07.320923  405668 cri.go:89] found id: "f4ac5ed0bb71af6a3a22c2384168e3c4e9e23c1de940ae834d03068e9fea08ee"
	I1213 13:07:07.320926  405668 cri.go:89] found id: "bb2165f7660fc2ba491c4871263b79975a85cb6def2d2a4f73eca8a2dd7d8f07"
	I1213 13:07:07.320933  405668 cri.go:89] found id: "be21f9e65e565f792b744333cf27c95ecfa408d73cd2a551f0c5c7f265a293e3"
	I1213 13:07:07.320939  405668 cri.go:89] found id: "810cfaaa4b7814b312b2db787f8ec029d4e832cc7e12034fb85045552bd3f724"
	I1213 13:07:07.320943  405668 cri.go:89] found id: "5eca19a8b70c2a0e9d976b959fbf7d7aa4c7ee8009fb16d38e7b5f5c02b8cce6"
	I1213 13:07:07.320946  405668 cri.go:89] found id: "d50cb67d5dec7ec3f682549ab14b880502935a667c57f8d8cdb0c463515a22e6"
	I1213 13:07:07.320949  405668 cri.go:89] found id: "b6315f71701be89e474fba173cf05ee0075e34674512768e6df77a3cc4cd9523"
	I1213 13:07:07.320954  405668 cri.go:89] found id: "610b806094f3861cda2f55f3c5ae8348739fd03173056cb05f1e55d0f129881d"
	I1213 13:07:07.320956  405668 cri.go:89] found id: "2a7f427a075b6ebada9bc037f76c3a7326d7c26ef26054dd05f59dd7a696441e"
	I1213 13:07:07.320959  405668 cri.go:89] found id: "9b7e546540c7cea0b7f684aeaa74db9dca87eb76f77d77d8121f3927b1239ae2"
	I1213 13:07:07.320962  405668 cri.go:89] found id: "dba035f34dd51a8cd71b4f0ae554035ac03076228fce0be93b5b35ef0ca0e069"
	I1213 13:07:07.320964  405668 cri.go:89] found id: ""
	I1213 13:07:07.321003  405668 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:07:07.334493  405668 out.go:203] 
	W1213 13:07:07.335592  405668 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 13:07:07.335617  405668 out.go:285] * 
	* 
	W1213 13:07:07.339552  405668 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 13:07:07.340888  405668 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-802674 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.42s)

TestAddons/parallel/Ingress (149.91s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-802674 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-802674 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-802674 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [e6c288ee-b7e2-4d37-af79-4a02b527056f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [e6c288ee-b7e2-4d37-af79-4a02b527056f] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003749264s
I1213 13:07:14.808522  394130 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-802674 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-802674 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.481753493s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-802674 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-802674 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
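The failing assertion here is the in-node curl against the ingress controller: the remote command exited with status 28, which matches curl's operation-timeout exit code, after roughly 2m16s. A reproduction sketch using the same command the test ran, assuming the addons-802674 profile is still running:

    # Same check the test performs: request the nginx ingress from inside the node via the Host header.
    out/minikube-linux-amd64 -p addons-802674 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
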
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-802674
helpers_test.go:244: (dbg) docker inspect addons-802674:

-- stdout --
	[
	    {
	        "Id": "270e64e091ea2f346242d00053ae9930f5b0ab1d4a8898bb83dfab5c4d9327dd",
	        "Created": "2025-12-13T13:05:07.436979754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 396538,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:05:07.468333702Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/270e64e091ea2f346242d00053ae9930f5b0ab1d4a8898bb83dfab5c4d9327dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/270e64e091ea2f346242d00053ae9930f5b0ab1d4a8898bb83dfab5c4d9327dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/270e64e091ea2f346242d00053ae9930f5b0ab1d4a8898bb83dfab5c4d9327dd/hosts",
	        "LogPath": "/var/lib/docker/containers/270e64e091ea2f346242d00053ae9930f5b0ab1d4a8898bb83dfab5c4d9327dd/270e64e091ea2f346242d00053ae9930f5b0ab1d4a8898bb83dfab5c4d9327dd-json.log",
	        "Name": "/addons-802674",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-802674:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-802674",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "270e64e091ea2f346242d00053ae9930f5b0ab1d4a8898bb83dfab5c4d9327dd",
	                "LowerDir": "/var/lib/docker/overlay2/aa929da5763204b22b7e604ac815e80f96b30dfe5cd1593cf34830d30d7d00f5-init/diff:/var/lib/docker/overlay2/2ab30f867418f233812f5ff754587aaeab7569a5579dc6a5c99873a35cf81eb6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aa929da5763204b22b7e604ac815e80f96b30dfe5cd1593cf34830d30d7d00f5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aa929da5763204b22b7e604ac815e80f96b30dfe5cd1593cf34830d30d7d00f5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aa929da5763204b22b7e604ac815e80f96b30dfe5cd1593cf34830d30d7d00f5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-802674",
	                "Source": "/var/lib/docker/volumes/addons-802674/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-802674",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-802674",
	                "name.minikube.sigs.k8s.io": "addons-802674",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9b7891cc6378426857df09cba08a56ab9633cf1ab32151364aff4fdc3cf11f57",
	            "SandboxKey": "/var/run/docker/netns/9b7891cc6378",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-802674": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "761dfdd70f6193271784b82d834b359f6576b740ac6d713183b9e21d7d14e9a1",
	                    "EndpointID": "757048b00cb230b2b4d7c6bdcb1de7c61e1295333cd75069a97fae99a1d19210",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "32:57:aa:2d:c9:90",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-802674",
	                        "270e64e091ea"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
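The port mappings in the inspect output above are what the earlier cli_runner calls read; for example, the SSH host port (33143) used by the test's ssh client can be retrieved with the same Go template that appears in the log, assuming the container is still running:

    # Extract the host port mapped to the node's SSH port 22 (template taken from the cli_runner line in the log).
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-802674
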
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-802674 -n addons-802674
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-802674 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-802674 logs -n 25: (1.139006177s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-589486 --alsologtostderr --binary-mirror http://127.0.0.1:44049 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-589486 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	│ delete  │ -p binary-mirror-589486                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-589486 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ addons  │ disable dashboard -p addons-802674                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-802674        │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	│ addons  │ enable dashboard -p addons-802674                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-802674        │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	│ start   │ -p addons-802674 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-802674        │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:06 UTC │
	│ addons  │ addons-802674 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-802674        │ jenkins │ v1.37.0 │ 13 Dec 25 13:06 UTC │                     │
	│ addons  │ addons-802674 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-802674        │ jenkins │ v1.37.0 │ 13 Dec 25 13:06 UTC │                     │
	│ addons  │ enable headlamp -p addons-802674 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-802674        │ jenkins │ v1.37.0 │ 13 Dec 25 13:06 UTC │                     │
	│ addons  │ addons-802674 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-802674        │ jenkins │ v1.37.0 │ 13 Dec 25 13:07 UTC │                     │
	│ addons  │ addons-802674 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-802674        │ jenkins │ v1.37.0 │ 13 Dec 25 13:07 UTC │                     │
	│ addons  │ addons-802674 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-802674        │ jenkins │ v1.37.0 │ 13 Dec 25 13:07 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-802674                                                                                                                                                                                                                                                                                                                                                                                           │ addons-802674        │ jenkins │ v1.37.0 │ 13 Dec 25 13:07 UTC │ 13 Dec 25 13:07 UTC │
	│ addons  │ addons-802674 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-802674        │ jenkins │ v1.37.0 │ 13 Dec 25 13:07 UTC │                     │
	│ ip      │ addons-802674 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-802674        │ jenkins │ v1.37.0 │ 13 Dec 25 13:07 UTC │ 13 Dec 25 13:07 UTC │
	│ addons  │ addons-802674 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-802674        │ jenkins │ v1.37.0 │ 13 Dec 25 13:07 UTC │                     │
	│ addons  │ addons-802674 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-802674        │ jenkins │ v1.37.0 │ 13 Dec 25 13:07 UTC │                     │
	│ ssh     │ addons-802674 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-802674        │ jenkins │ v1.37.0 │ 13 Dec 25 13:07 UTC │                     │
	│ addons  │ addons-802674 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-802674        │ jenkins │ v1.37.0 │ 13 Dec 25 13:07 UTC │                     │
	│ addons  │ addons-802674 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-802674        │ jenkins │ v1.37.0 │ 13 Dec 25 13:07 UTC │                     │
	│ addons  │ addons-802674 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-802674        │ jenkins │ v1.37.0 │ 13 Dec 25 13:07 UTC │                     │
	│ ssh     │ addons-802674 ssh cat /opt/local-path-provisioner/pvc-c7df13f3-7532-4920-a88e-f3a79a290a56_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-802674        │ jenkins │ v1.37.0 │ 13 Dec 25 13:07 UTC │ 13 Dec 25 13:07 UTC │
	│ addons  │ addons-802674 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-802674        │ jenkins │ v1.37.0 │ 13 Dec 25 13:07 UTC │                     │
	│ addons  │ addons-802674 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-802674        │ jenkins │ v1.37.0 │ 13 Dec 25 13:07 UTC │                     │
	│ addons  │ addons-802674 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-802674        │ jenkins │ v1.37.0 │ 13 Dec 25 13:07 UTC │                     │
	│ ip      │ addons-802674 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-802674        │ jenkins │ v1.37.0 │ 13 Dec 25 13:09 UTC │ 13 Dec 25 13:09 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:04:44
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:04:44.951695  395903 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:04:44.951975  395903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:04:44.951988  395903 out.go:374] Setting ErrFile to fd 2...
	I1213 13:04:44.951992  395903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:04:44.952172  395903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:04:44.952648  395903 out.go:368] Setting JSON to false
	I1213 13:04:44.953550  395903 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6433,"bootTime":1765624652,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:04:44.953602  395903 start.go:143] virtualization: kvm guest
	I1213 13:04:44.955322  395903 out.go:179] * [addons-802674] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:04:44.956540  395903 notify.go:221] Checking for updates...
	I1213 13:04:44.956566  395903 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:04:44.957828  395903 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:04:44.959074  395903 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:04:44.960190  395903 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:04:44.961216  395903 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:04:44.962302  395903 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:04:44.963872  395903 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:04:44.986753  395903 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:04:44.986866  395903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:04:45.043388  395903 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-13 13:04:45.03440041 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:04:45.043497  395903 docker.go:319] overlay module found
	I1213 13:04:45.045051  395903 out.go:179] * Using the docker driver based on user configuration
	I1213 13:04:45.046015  395903 start.go:309] selected driver: docker
	I1213 13:04:45.046034  395903 start.go:927] validating driver "docker" against <nil>
	I1213 13:04:45.046051  395903 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:04:45.046671  395903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:04:45.098882  395903 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-13 13:04:45.089450922 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:04:45.099045  395903 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 13:04:45.099250  395903 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:04:45.100737  395903 out.go:179] * Using Docker driver with root privileges
	I1213 13:04:45.101823  395903 cni.go:84] Creating CNI manager for ""
	I1213 13:04:45.101905  395903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:04:45.101920  395903 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 13:04:45.102004  395903 start.go:353] cluster config:
	{Name:addons-802674 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-802674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1213 13:04:45.103353  395903 out.go:179] * Starting "addons-802674" primary control-plane node in "addons-802674" cluster
	I1213 13:04:45.104449  395903 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 13:04:45.105447  395903 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 13:04:45.106611  395903 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:04:45.106648  395903 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 13:04:45.106677  395903 cache.go:65] Caching tarball of preloaded images
	I1213 13:04:45.106732  395903 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 13:04:45.106896  395903 preload.go:238] Found /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 13:04:45.106919  395903 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 13:04:45.107282  395903 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/config.json ...
	I1213 13:04:45.107309  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/config.json: {Name:mkecf58a651585115de101f1a06b6b9ad5bfd689 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:04:45.122846  395903 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 13:04:45.122959  395903 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1213 13:04:45.122974  395903 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1213 13:04:45.122978  395903 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1213 13:04:45.122988  395903 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1213 13:04:45.122994  395903 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from local cache
	I1213 13:04:56.836949  395903 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from cached tarball
	I1213 13:04:56.836991  395903 cache.go:243] Successfully downloaded all kic artifacts
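[Editorial note, not part of the captured output] The lines above load the kic base image from the cached tarball into the local Docker daemon. As an illustrative check, its presence and digest can be confirmed from the same host:

	# Illustrative only: list the kic base image the log just loaded, with digest.
	docker images --digests gcr.io/k8s-minikube/kicbase-builds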
	I1213 13:04:56.837037  395903 start.go:360] acquireMachinesLock for addons-802674: {Name:mk0ce315a4c9f97eec976638407460e431021c73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:04:56.837137  395903 start.go:364] duration metric: took 77.132µs to acquireMachinesLock for "addons-802674"
	I1213 13:04:56.837159  395903 start.go:93] Provisioning new machine with config: &{Name:addons-802674 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-802674 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:04:56.837234  395903 start.go:125] createHost starting for "" (driver="docker")
	I1213 13:04:56.838884  395903 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1213 13:04:56.839120  395903 start.go:159] libmachine.API.Create for "addons-802674" (driver="docker")
	I1213 13:04:56.839157  395903 client.go:173] LocalClient.Create starting
	I1213 13:04:56.839252  395903 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem
	I1213 13:04:56.888168  395903 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem
	I1213 13:04:56.941645  395903 cli_runner.go:164] Run: docker network inspect addons-802674 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 13:04:56.958586  395903 cli_runner.go:211] docker network inspect addons-802674 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 13:04:56.958671  395903 network_create.go:284] running [docker network inspect addons-802674] to gather additional debugging logs...
	I1213 13:04:56.958688  395903 cli_runner.go:164] Run: docker network inspect addons-802674
	W1213 13:04:56.974960  395903 cli_runner.go:211] docker network inspect addons-802674 returned with exit code 1
	I1213 13:04:56.974994  395903 network_create.go:287] error running [docker network inspect addons-802674]: docker network inspect addons-802674: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-802674 not found
	I1213 13:04:56.975008  395903 network_create.go:289] output of [docker network inspect addons-802674]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-802674 not found
	
	** /stderr **
	I1213 13:04:56.975097  395903 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:04:56.990971  395903 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e7c410}
	I1213 13:04:56.991012  395903 network_create.go:124] attempt to create docker network addons-802674 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1213 13:04:56.991058  395903 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-802674 addons-802674
	I1213 13:04:57.242026  395903 network_create.go:108] docker network addons-802674 192.168.49.0/24 created
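[Editorial note, not part of the captured output] The dedicated bridge network created above can be inspected by hand; a minimal sketch using the network name and options recorded in the log:

	# Show the subnet/gateway and MTU of the network minikube just created.
	docker network inspect addons-802674 --format '{{json .IPAM.Config}}'
	docker network inspect addons-802674 --format '{{index .Options "com.docker.network.driver.mtu"}}'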
	I1213 13:04:57.242070  395903 kic.go:121] calculated static IP "192.168.49.2" for the "addons-802674" container
	I1213 13:04:57.242140  395903 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 13:04:57.258327  395903 cli_runner.go:164] Run: docker volume create addons-802674 --label name.minikube.sigs.k8s.io=addons-802674 --label created_by.minikube.sigs.k8s.io=true
	I1213 13:04:57.310915  395903 oci.go:103] Successfully created a docker volume addons-802674
	I1213 13:04:57.310993  395903 cli_runner.go:164] Run: docker run --rm --name addons-802674-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-802674 --entrypoint /usr/bin/test -v addons-802674:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 13:05:03.623744  395903 cli_runner.go:217] Completed: docker run --rm --name addons-802674-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-802674 --entrypoint /usr/bin/test -v addons-802674:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (6.312704028s)
	I1213 13:05:03.623798  395903 oci.go:107] Successfully prepared a docker volume addons-802674
	I1213 13:05:03.623901  395903 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:05:03.623920  395903 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 13:05:03.623999  395903 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-802674:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 13:05:07.365297  395903 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-802674:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.741234248s)
	I1213 13:05:07.365334  395903 kic.go:203] duration metric: took 3.741411187s to extract preloaded images to volume ...
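[Editorial note, not part of the captured output] The preload tarball is unpacked into the addons-802674 volume by a throwaway container. A hedged way to peek at the result, assuming the volume and the kicbase image are still present locally:

	# List what the preload extraction left under /var/lib in the named volume.
	docker run --rm -v addons-802674:/var --entrypoint /bin/ls \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f \
	  -la /var/lib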
	W1213 13:05:07.365440  395903 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1213 13:05:07.365480  395903 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1213 13:05:07.365529  395903 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 13:05:07.421238  395903 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-802674 --name addons-802674 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-802674 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-802674 --network addons-802674 --ip 192.168.49.2 --volume addons-802674:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 13:05:07.685223  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Running}}
	I1213 13:05:07.703335  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:07.719634  395903 cli_runner.go:164] Run: docker exec addons-802674 stat /var/lib/dpkg/alternatives/iptables
	I1213 13:05:07.766098  395903 oci.go:144] the created container "addons-802674" has a running status.
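[Editorial note, not part of the captured output] Once the node container is running, the host ports it publishes (SSH, API server, and so on) can be read back. An illustrative sketch that mirrors the inspect template minikube itself uses a few lines below:

	# Resolve the host port mapped to the container's SSH port (22/tcp).
	docker port addons-802674 22/tcp
	docker container inspect addons-802674 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'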
	I1213 13:05:07.766142  395903 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa...
	I1213 13:05:07.814959  395903 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 13:05:07.844868  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:07.862673  395903 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 13:05:07.862692  395903 kic_runner.go:114] Args: [docker exec --privileged addons-802674 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 13:05:07.902890  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:07.923315  395903 machine.go:94] provisionDockerMachine start ...
	I1213 13:05:07.923415  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:07.945278  395903 main.go:143] libmachine: Using SSH client type: native
	I1213 13:05:07.945638  395903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1213 13:05:07.945661  395903 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:05:07.946339  395903 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56446->127.0.0.1:33143: read: connection reset by peer
	I1213 13:05:11.076393  395903 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-802674
	
	I1213 13:05:11.076421  395903 ubuntu.go:182] provisioning hostname "addons-802674"
	I1213 13:05:11.076486  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:11.094349  395903 main.go:143] libmachine: Using SSH client type: native
	I1213 13:05:11.094607  395903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1213 13:05:11.094629  395903 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-802674 && echo "addons-802674" | sudo tee /etc/hostname
	I1213 13:05:11.234125  395903 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-802674
	
	I1213 13:05:11.234211  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:11.251529  395903 main.go:143] libmachine: Using SSH client type: native
	I1213 13:05:11.251806  395903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1213 13:05:11.251836  395903 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-802674' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-802674/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-802674' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:05:11.383057  395903 main.go:143] libmachine: SSH cmd err, output: <nil>: 
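[Editorial note, not part of the captured output] The two SSH commands above set the machine hostname and pin it in /etc/hosts. An illustrative verification from the host:

	# Confirm the hostname and hosts entry provisioned above.
	docker exec addons-802674 hostname
	docker exec addons-802674 grep addons-802674 /etc/hostname /etc/hosts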
	I1213 13:05:11.383107  395903 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-390571/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-390571/.minikube}
	I1213 13:05:11.383145  395903 ubuntu.go:190] setting up certificates
	I1213 13:05:11.383166  395903 provision.go:84] configureAuth start
	I1213 13:05:11.383231  395903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-802674
	I1213 13:05:11.400319  395903 provision.go:143] copyHostCerts
	I1213 13:05:11.400418  395903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem (1078 bytes)
	I1213 13:05:11.400534  395903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem (1123 bytes)
	I1213 13:05:11.400608  395903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem (1679 bytes)
	I1213 13:05:11.400662  395903 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem org=jenkins.addons-802674 san=[127.0.0.1 192.168.49.2 addons-802674 localhost minikube]
	I1213 13:05:11.447356  395903 provision.go:177] copyRemoteCerts
	I1213 13:05:11.447414  395903 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:05:11.447449  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:11.465388  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:11.560753  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:05:11.578971  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 13:05:11.595573  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 13:05:11.612306  395903 provision.go:87] duration metric: took 229.11797ms to configureAuth
	I1213 13:05:11.612327  395903 ubuntu.go:206] setting minikube options for container-runtime
	I1213 13:05:11.612493  395903 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:05:11.612610  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:11.629684  395903 main.go:143] libmachine: Using SSH client type: native
	I1213 13:05:11.629924  395903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1213 13:05:11.629940  395903 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:05:11.897039  395903 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:05:11.897065  395903 machine.go:97] duration metric: took 3.973724993s to provisionDockerMachine
	I1213 13:05:11.897077  395903 client.go:176] duration metric: took 15.057911494s to LocalClient.Create
	I1213 13:05:11.897097  395903 start.go:167] duration metric: took 15.057978862s to libmachine.API.Create "addons-802674"
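[Editorial note, not part of the captured output] The container-runtime option written during provisioning lands in a small sysconfig file inside the node. Illustrative checks:

	# Show the CRI-O drop-in written above and confirm the service survived the restart.
	docker exec addons-802674 cat /etc/sysconfig/crio.minikube
	docker exec addons-802674 systemctl is-active crio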
	I1213 13:05:11.897105  395903 start.go:293] postStartSetup for "addons-802674" (driver="docker")
	I1213 13:05:11.897115  395903 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:05:11.897172  395903 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:05:11.897206  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:11.914876  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:12.011356  395903 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:05:12.014681  395903 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 13:05:12.014708  395903 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 13:05:12.014721  395903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/addons for local assets ...
	I1213 13:05:12.014809  395903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/files for local assets ...
	I1213 13:05:12.014844  395903 start.go:296] duration metric: took 117.732854ms for postStartSetup
	I1213 13:05:12.015108  395903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-802674
	I1213 13:05:12.032887  395903 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/config.json ...
	I1213 13:05:12.033184  395903 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:05:12.033261  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:12.050942  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:12.142759  395903 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 13:05:12.147068  395903 start.go:128] duration metric: took 15.309818955s to createHost
	I1213 13:05:12.147091  395903 start.go:83] releasing machines lock for "addons-802674", held for 15.309943372s
	I1213 13:05:12.147156  395903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-802674
	I1213 13:05:12.164286  395903 ssh_runner.go:195] Run: cat /version.json
	I1213 13:05:12.164340  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:12.164389  395903 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:05:12.164472  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:12.182041  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:12.182434  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:12.327040  395903 ssh_runner.go:195] Run: systemctl --version
	I1213 13:05:12.333452  395903 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:05:12.367221  395903 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 13:05:12.371959  395903 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:05:12.372033  395903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:05:12.397050  395903 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 13:05:12.397077  395903 start.go:496] detecting cgroup driver to use...
	I1213 13:05:12.397108  395903 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 13:05:12.397148  395903 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:05:12.412619  395903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:05:12.424101  395903 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:05:12.424172  395903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:05:12.439536  395903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:05:12.455822  395903 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:05:12.535319  395903 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:05:12.621601  395903 docker.go:234] disabling docker service ...
	I1213 13:05:12.621672  395903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:05:12.640150  395903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:05:12.651700  395903 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:05:12.731247  395903 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:05:12.809540  395903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:05:12.821297  395903 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:05:12.834593  395903 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 13:05:12.834654  395903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:05:12.844419  395903 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 13:05:12.844476  395903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:05:12.852740  395903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:05:12.860641  395903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:05:12.868531  395903 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:05:12.875825  395903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:05:12.883814  395903 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:05:12.896299  395903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:05:12.904242  395903 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:05:12.910917  395903 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:05:12.918139  395903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:05:12.993561  395903 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 13:05:13.126025  395903 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:05:13.126104  395903 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:05:13.129940  395903 start.go:564] Will wait 60s for crictl version
	I1213 13:05:13.129988  395903 ssh_runner.go:195] Run: which crictl
	I1213 13:05:13.133365  395903 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 13:05:13.157882  395903 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 13:05:13.157972  395903 ssh_runner.go:195] Run: crio --version
	I1213 13:05:13.184740  395903 ssh_runner.go:195] Run: crio --version
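[Editorial note, not part of the captured output] The block above rewrites /etc/crio/crio.conf.d/02-crio.conf one sed at a time (pause image, cgroup manager, conmon cgroup, unprivileged ports) and then restarts CRI-O. A consolidated sketch of the two simplest edits, for manual reproduction on the node with the same paths and values as the log:

	# Apply the pause-image and cgroup-manager edits in one pass, then restart CRI-O.
	sudo sed -i \
	  -e 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' \
	  -e 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' \
	  /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl restart crio
	sudo crictl version    # expect RuntimeName cri-o, RuntimeApiVersion v1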
	I1213 13:05:13.212660  395903 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 13:05:13.213841  395903 cli_runner.go:164] Run: docker network inspect addons-802674 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:05:13.230738  395903 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 13:05:13.234604  395903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:05:13.244446  395903 kubeadm.go:884] updating cluster {Name:addons-802674 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-802674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:05:13.244565  395903 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:05:13.244607  395903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:05:13.273510  395903 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:05:13.273527  395903 crio.go:433] Images already preloaded, skipping extraction
	I1213 13:05:13.273567  395903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:05:13.296142  395903 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:05:13.296164  395903 cache_images.go:86] Images are preloaded, skipping loading
	I1213 13:05:13.296173  395903 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1213 13:05:13.296260  395903 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-802674 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-802674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 13:05:13.296330  395903 ssh_runner.go:195] Run: crio config
	I1213 13:05:13.341122  395903 cni.go:84] Creating CNI manager for ""
	I1213 13:05:13.341147  395903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:05:13.341169  395903 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 13:05:13.341193  395903 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-802674 NodeName:addons-802674 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:05:13.341327  395903 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-802674"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 13:05:13.341388  395903 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 13:05:13.349251  395903 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:05:13.349313  395903 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:05:13.356669  395903 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1213 13:05:13.368993  395903 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 13:05:13.383278  395903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
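[Editorial note, not part of the captured output] The generated kubeadm configuration shown above is copied to /var/tmp/minikube/kubeadm.yaml.new on the node. An illustrative dry-run validation, assuming kubeadm sits next to kubelet in the versioned binaries directory listed a few lines above:

	# Validate the generated config without changing cluster state.
	docker exec addons-802674 sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run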
	I1213 13:05:13.394612  395903 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 13:05:13.397887  395903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:05:13.407004  395903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:05:13.485308  395903 ssh_runner.go:195] Run: sudo systemctl start kubelet
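[Editorial note, not part of the captured output] The kubelet unit and its 10-kubeadm.conf drop-in were written just above and the service started. Illustrative checks of what systemd now sees:

	# Show the effective kubelet unit (including the drop-in) and its current state.
	docker exec addons-802674 systemctl cat kubelet
	docker exec addons-802674 systemctl is-active kubelet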
	I1213 13:05:13.510258  395903 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674 for IP: 192.168.49.2
	I1213 13:05:13.510279  395903 certs.go:195] generating shared ca certs ...
	I1213 13:05:13.510302  395903 certs.go:227] acquiring lock for ca certs: {Name:mkb6963f3134ffd486c672ddb3a967e56122d5d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:13.510441  395903 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key
	I1213 13:05:13.585456  395903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt ...
	I1213 13:05:13.585484  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt: {Name:mkbe6268781c2593d1b2a5df3e1ac616a830a0d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:13.585769  395903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key ...
	I1213 13:05:13.585809  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key: {Name:mked8d99a218a7be1585007abbfdeebc7c1923af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:13.585979  395903 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key
	I1213 13:05:13.765995  395903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt ...
	I1213 13:05:13.766029  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt: {Name:mk5044bf824cba2459cb0a754c1bb7c6e978d3e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:13.766234  395903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key ...
	I1213 13:05:13.766250  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key: {Name:mk628e8c4a55d459061863b7406789f36f4492a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
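[Editorial note, not part of the captured output] The shared CA material above is written under the host's .minikube directory. With openssl available on the host, the freshly generated CA can be inspected (path taken from the log):

	# Print subject and validity of the minikube CA generated above.
	openssl x509 -noout -subject -dates \
	  -in /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt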
	I1213 13:05:13.766358  395903 certs.go:257] generating profile certs ...
	I1213 13:05:13.766431  395903 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.key
	I1213 13:05:13.766446  395903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt with IP's: []
	I1213 13:05:13.845037  395903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt ...
	I1213 13:05:13.845066  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: {Name:mk054cfd3343c256548cf41a3693281b626b8888 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:13.845251  395903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.key ...
	I1213 13:05:13.845266  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.key: {Name:mkc9cee120f1f6bd3a416b22a88c1d52218ccb68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:13.845370  395903 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.key.c612d677
	I1213 13:05:13.845390  395903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.crt.c612d677 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1213 13:05:13.867580  395903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.crt.c612d677 ...
	I1213 13:05:13.867602  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.crt.c612d677: {Name:mk8fe8a5adab02b947191b431981eaaee59403fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:13.867739  395903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.key.c612d677 ...
	I1213 13:05:13.867760  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.key.c612d677: {Name:mk766c61ae7dcc9b47f772b9a771c9f092571a26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:13.867870  395903 certs.go:382] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.crt.c612d677 -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.crt
	I1213 13:05:13.867969  395903 certs.go:386] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.key.c612d677 -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.key
	I1213 13:05:13.868033  395903 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/proxy-client.key
	I1213 13:05:13.868051  395903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/proxy-client.crt with IP's: []
	I1213 13:05:13.969028  395903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/proxy-client.crt ...
	I1213 13:05:13.969053  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/proxy-client.crt: {Name:mkd61be8248d1931b4aa61ed6cc43eb7679cae12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:13.969211  395903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/proxy-client.key ...
	I1213 13:05:13.969226  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/proxy-client.key: {Name:mk8fb6e2f1728c6f07435c4e1d84d8766afaf9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:13.969427  395903 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:05:13.969464  395903 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:05:13.969490  395903 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:05:13.969512  395903 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem (1679 bytes)
	I1213 13:05:13.970093  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:05:13.988361  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 13:05:14.005215  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:05:14.022321  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 13:05:14.039158  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 13:05:14.055721  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 13:05:14.072631  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:05:14.088947  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 13:05:14.105435  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:05:14.123720  395903 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:05:14.135803  395903 ssh_runner.go:195] Run: openssl version
	I1213 13:05:14.141604  395903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:05:14.148286  395903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:05:14.157148  395903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:05:14.160533  395903 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:05:14.160597  395903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:05:14.193841  395903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 13:05:14.201761  395903 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 13:05:14.208827  395903 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:05:14.212191  395903 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 13:05:14.212235  395903 kubeadm.go:401] StartCluster: {Name:addons-802674 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-802674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:05:14.212316  395903 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:05:14.212364  395903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:05:14.238801  395903 cri.go:89] found id: ""
	I1213 13:05:14.238849  395903 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:05:14.246513  395903 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 13:05:14.254489  395903 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 13:05:14.254531  395903 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 13:05:14.261552  395903 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 13:05:14.261568  395903 kubeadm.go:158] found existing configuration files:
	
	I1213 13:05:14.261622  395903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 13:05:14.268607  395903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 13:05:14.268652  395903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 13:05:14.275449  395903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 13:05:14.282456  395903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 13:05:14.282504  395903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 13:05:14.289847  395903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 13:05:14.297418  395903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 13:05:14.297464  395903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 13:05:14.304246  395903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 13:05:14.311149  395903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 13:05:14.311192  395903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 13:05:14.318036  395903 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 13:05:14.354521  395903 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 13:05:14.354597  395903 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 13:05:14.388797  395903 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 13:05:14.388882  395903 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1213 13:05:14.388925  395903 kubeadm.go:319] OS: Linux
	I1213 13:05:14.388984  395903 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 13:05:14.389045  395903 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 13:05:14.389106  395903 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 13:05:14.389164  395903 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 13:05:14.389269  395903 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 13:05:14.389335  395903 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 13:05:14.389390  395903 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 13:05:14.389442  395903 kubeadm.go:319] CGROUPS_IO: enabled
	I1213 13:05:14.449420  395903 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 13:05:14.449546  395903 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 13:05:14.449670  395903 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 13:05:14.457463  395903 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 13:05:14.459289  395903 out.go:252]   - Generating certificates and keys ...
	I1213 13:05:14.459400  395903 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 13:05:14.459507  395903 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 13:05:14.815230  395903 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 13:05:15.014501  395903 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 13:05:15.501546  395903 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 13:05:16.158321  395903 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 13:05:16.209825  395903 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 13:05:16.209971  395903 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-802674 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 13:05:16.325388  395903 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 13:05:16.325569  395903 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-802674 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 13:05:16.879110  395903 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 13:05:17.048064  395903 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 13:05:17.227180  395903 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 13:05:17.227253  395903 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 13:05:17.329825  395903 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 13:05:17.368318  395903 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 13:05:17.655083  395903 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 13:05:17.736743  395903 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 13:05:18.218958  395903 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 13:05:18.219426  395903 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 13:05:18.222733  395903 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 13:05:18.223938  395903 out.go:252]   - Booting up control plane ...
	I1213 13:05:18.224063  395903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 13:05:18.224177  395903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 13:05:18.224836  395903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 13:05:18.238023  395903 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 13:05:18.238150  395903 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 13:05:18.244127  395903 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 13:05:18.244403  395903 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 13:05:18.244439  395903 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 13:05:18.345144  395903 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 13:05:18.345286  395903 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 13:05:19.346704  395903 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001722958s
	I1213 13:05:19.351102  395903 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 13:05:19.351228  395903 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1213 13:05:19.351340  395903 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 13:05:19.351451  395903 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 13:05:20.355486  395903 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004352532s
	I1213 13:05:20.929932  395903 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.578835844s
	I1213 13:05:22.853003  395903 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501895816s
	I1213 13:05:22.870210  395903 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 13:05:22.878708  395903 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 13:05:22.887074  395903 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 13:05:22.887375  395903 kubeadm.go:319] [mark-control-plane] Marking the node addons-802674 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 13:05:22.894314  395903 kubeadm.go:319] [bootstrap-token] Using token: mcbcc2.gt01yxp6tdtgacjl
	I1213 13:05:22.895341  395903 out.go:252]   - Configuring RBAC rules ...
	I1213 13:05:22.895511  395903 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 13:05:22.898537  395903 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 13:05:22.903956  395903 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 13:05:22.905997  395903 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 13:05:22.908065  395903 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 13:05:22.910298  395903 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 13:05:23.259057  395903 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 13:05:23.673978  395903 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 13:05:24.258481  395903 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 13:05:24.259342  395903 kubeadm.go:319] 
	I1213 13:05:24.259430  395903 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 13:05:24.259441  395903 kubeadm.go:319] 
	I1213 13:05:24.259525  395903 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 13:05:24.259536  395903 kubeadm.go:319] 
	I1213 13:05:24.259556  395903 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 13:05:24.259689  395903 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 13:05:24.259802  395903 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 13:05:24.259812  395903 kubeadm.go:319] 
	I1213 13:05:24.259895  395903 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 13:05:24.259905  395903 kubeadm.go:319] 
	I1213 13:05:24.259949  395903 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 13:05:24.259955  395903 kubeadm.go:319] 
	I1213 13:05:24.259998  395903 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 13:05:24.260112  395903 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 13:05:24.260169  395903 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 13:05:24.260196  395903 kubeadm.go:319] 
	I1213 13:05:24.260264  395903 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 13:05:24.260406  395903 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 13:05:24.260424  395903 kubeadm.go:319] 
	I1213 13:05:24.260540  395903 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token mcbcc2.gt01yxp6tdtgacjl \
	I1213 13:05:24.260698  395903 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ef8a7d1add12598ce2ec2dab13c01ff0d42437969bb9f662810a30bd819ab8f9 \
	I1213 13:05:24.260728  395903 kubeadm.go:319] 	--control-plane 
	I1213 13:05:24.260738  395903 kubeadm.go:319] 
	I1213 13:05:24.260880  395903 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 13:05:24.260892  395903 kubeadm.go:319] 
	I1213 13:05:24.261003  395903 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token mcbcc2.gt01yxp6tdtgacjl \
	I1213 13:05:24.261169  395903 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ef8a7d1add12598ce2ec2dab13c01ff0d42437969bb9f662810a30bd819ab8f9 
	I1213 13:05:24.263318  395903 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1213 13:05:24.263476  395903 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 13:05:24.263586  395903 cni.go:84] Creating CNI manager for ""
	I1213 13:05:24.263600  395903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:05:24.265018  395903 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1213 13:05:24.266116  395903 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 13:05:24.270351  395903 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 13:05:24.270368  395903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1213 13:05:24.282883  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 13:05:24.483344  395903 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 13:05:24.483451  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:24.483451  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-802674 minikube.k8s.io/updated_at=2025_12_13T13_05_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7 minikube.k8s.io/name=addons-802674 minikube.k8s.io/primary=true
	I1213 13:05:24.492577  395903 ops.go:34] apiserver oom_adj: -16
	I1213 13:05:24.558156  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:25.058485  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:25.558964  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:26.059074  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:26.559199  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:27.058915  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:27.558294  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:28.058981  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:28.559092  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:29.058198  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:29.122736  395903 kubeadm.go:1114] duration metric: took 4.6393628s to wait for elevateKubeSystemPrivileges
	I1213 13:05:29.122786  395903 kubeadm.go:403] duration metric: took 14.910542769s to StartCluster
	I1213 13:05:29.122815  395903 settings.go:142] acquiring lock: {Name:mkb44193ba58b09d8615650747eaad19c43e1a80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:29.122948  395903 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:05:29.123341  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/kubeconfig: {Name:mke96882ff9199e558f67b9408c8f04265bde7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:29.123548  395903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 13:05:29.123563  395903 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:05:29.123644  395903 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1213 13:05:29.123765  395903 addons.go:70] Setting yakd=true in profile "addons-802674"
	I1213 13:05:29.123799  395903 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:05:29.123813  395903 addons.go:239] Setting addon yakd=true in "addons-802674"
	I1213 13:05:29.123816  395903 addons.go:70] Setting inspektor-gadget=true in profile "addons-802674"
	I1213 13:05:29.123849  395903 addons.go:70] Setting default-storageclass=true in profile "addons-802674"
	I1213 13:05:29.123851  395903 addons.go:239] Setting addon inspektor-gadget=true in "addons-802674"
	I1213 13:05:29.123857  395903 addons.go:70] Setting storage-provisioner=true in profile "addons-802674"
	I1213 13:05:29.123869  395903 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-802674"
	I1213 13:05:29.123876  395903 addons.go:239] Setting addon storage-provisioner=true in "addons-802674"
	I1213 13:05:29.123897  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.123887  395903 addons.go:70] Setting cloud-spanner=true in profile "addons-802674"
	I1213 13:05:29.123911  395903 addons.go:70] Setting metrics-server=true in profile "addons-802674"
	I1213 13:05:29.123934  395903 addons.go:239] Setting addon metrics-server=true in "addons-802674"
	I1213 13:05:29.123945  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.123963  395903 addons.go:239] Setting addon cloud-spanner=true in "addons-802674"
	I1213 13:05:29.123984  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.124022  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.124197  395903 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-802674"
	I1213 13:05:29.124258  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.124298  395903 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-802674"
	I1213 13:05:29.124335  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.124468  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.124498  395903 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-802674"
	I1213 13:05:29.124514  395903 addons.go:70] Setting volcano=true in profile "addons-802674"
	I1213 13:05:29.124516  395903 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-802674"
	I1213 13:05:29.124526  395903 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-802674"
	I1213 13:05:29.124535  395903 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-802674"
	I1213 13:05:29.124539  395903 addons.go:70] Setting volumesnapshots=true in profile "addons-802674"
	I1213 13:05:29.124549  395903 addons.go:239] Setting addon volumesnapshots=true in "addons-802674"
	I1213 13:05:29.124555  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.124569  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.124849  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.124488  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.124988  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.125066  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.125828  395903 out.go:179] * Verifying Kubernetes components...
	I1213 13:05:29.123850  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.126471  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.124880  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.124499  395903 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-802674"
	I1213 13:05:29.127522  395903 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-802674"
	I1213 13:05:29.127553  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.128080  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.128577  395903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:05:29.124490  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.124529  395903 addons.go:239] Setting addon volcano=true in "addons-802674"
	I1213 13:05:29.128941  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.129412  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.124496  395903 addons.go:70] Setting gcp-auth=true in profile "addons-802674"
	I1213 13:05:29.130008  395903 mustload.go:66] Loading cluster: addons-802674
	I1213 13:05:29.124490  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.124502  395903 addons.go:70] Setting ingress=true in profile "addons-802674"
	I1213 13:05:29.130911  395903 addons.go:239] Setting addon ingress=true in "addons-802674"
	I1213 13:05:29.130953  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.124507  395903 addons.go:70] Setting ingress-dns=true in profile "addons-802674"
	I1213 13:05:29.134318  395903 addons.go:239] Setting addon ingress-dns=true in "addons-802674"
	I1213 13:05:29.134362  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.123846  395903 addons.go:70] Setting registry-creds=true in profile "addons-802674"
	I1213 13:05:29.134993  395903 addons.go:239] Setting addon registry-creds=true in "addons-802674"
	I1213 13:05:29.135024  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.124505  395903 addons.go:70] Setting registry=true in profile "addons-802674"
	I1213 13:05:29.135167  395903 addons.go:239] Setting addon registry=true in "addons-802674"
	I1213 13:05:29.135197  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.135501  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.136091  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.137407  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.138693  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.144462  395903 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:05:29.144762  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.172576  395903 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1213 13:05:29.173623  395903 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1213 13:05:29.173690  395903 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1213 13:05:29.173801  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	W1213 13:05:29.204423  395903 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1213 13:05:29.210311  395903 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1213 13:05:29.211457  395903 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 13:05:29.211478  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1213 13:05:29.211691  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.215244  395903 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1213 13:05:29.216270  395903 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1213 13:05:29.216305  395903 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1213 13:05:29.216768  395903 addons.go:239] Setting addon default-storageclass=true in "addons-802674"
	I1213 13:05:29.216931  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.216322  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1213 13:05:29.217290  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.217844  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.223755  395903 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1213 13:05:29.224143  395903 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1213 13:05:29.225242  395903 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 13:05:29.225260  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1213 13:05:29.225329  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.226990  395903 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1213 13:05:29.228045  395903 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1213 13:05:29.229174  395903 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1213 13:05:29.229241  395903 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1213 13:05:29.229252  395903 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1213 13:05:29.229315  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.230874  395903 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-802674"
	I1213 13:05:29.230924  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.230978  395903 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1213 13:05:29.231426  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.232657  395903 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1213 13:05:29.232703  395903 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 13:05:29.233729  395903 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:05:29.233808  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 13:05:29.234437  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.237630  395903 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1213 13:05:29.238957  395903 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1213 13:05:29.238987  395903 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1213 13:05:29.239040  395903 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1213 13:05:29.240143  395903 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 13:05:29.240162  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1213 13:05:29.240217  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.240322  395903 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1213 13:05:29.240329  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1213 13:05:29.240357  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.240684  395903 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1213 13:05:29.240729  395903 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1213 13:05:29.240790  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.246676  395903 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1213 13:05:29.247790  395903 out.go:179]   - Using image docker.io/registry:3.0.0
	I1213 13:05:29.248760  395903 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1213 13:05:29.248837  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1213 13:05:29.248912  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.253461  395903 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1213 13:05:29.254397  395903 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 13:05:29.254448  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1213 13:05:29.254539  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.257575  395903 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1213 13:05:29.258483  395903 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 13:05:29.258502  395903 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 13:05:29.258555  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.263247  395903 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 13:05:29.264724  395903 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 13:05:29.265900  395903 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1213 13:05:29.268180  395903 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 13:05:29.268202  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1213 13:05:29.268262  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.271322  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.277239  395903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 13:05:29.287070  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.293634  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.297993  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.299634  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.299982  395903 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 13:05:29.299998  395903 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 13:05:29.300071  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.302301  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.317844  395903 out.go:179]   - Using image docker.io/busybox:stable
	I1213 13:05:29.318975  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.319050  395903 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1213 13:05:29.319895  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.320146  395903 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 13:05:29.320169  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1213 13:05:29.320230  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.324007  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.325502  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.326606  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.327963  395903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:05:29.330987  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.331484  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.334451  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.361110  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.364577  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	W1213 13:05:29.364989  395903 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1213 13:05:29.365023  395903 retry.go:31] will retry after 256.098095ms: ssh: handshake failed: EOF
	W1213 13:05:29.365444  395903 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1213 13:05:29.365456  395903 retry.go:31] will retry after 341.621965ms: ssh: handshake failed: EOF
	I1213 13:05:29.469880  395903 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1213 13:05:29.469905  395903 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1213 13:05:29.477882  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 13:05:29.485210  395903 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1213 13:05:29.485237  395903 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1213 13:05:29.487425  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 13:05:29.497146  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:05:29.507221  395903 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1213 13:05:29.507255  395903 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1213 13:05:29.507410  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 13:05:29.508044  395903 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1213 13:05:29.508064  395903 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1213 13:05:29.509580  395903 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1213 13:05:29.509595  395903 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1213 13:05:29.512578  395903 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1213 13:05:29.512649  395903 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1213 13:05:29.539221  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 13:05:29.540683  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 13:05:29.547094  395903 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 13:05:29.547120  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1213 13:05:29.548241  395903 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1213 13:05:29.548281  395903 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1213 13:05:29.550930  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1213 13:05:29.551972  395903 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1213 13:05:29.551990  395903 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1213 13:05:29.562028  395903 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1213 13:05:29.562051  395903 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1213 13:05:29.564636  395903 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1213 13:05:29.564653  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1213 13:05:29.568408  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1213 13:05:29.601185  395903 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1213 13:05:29.601210  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1213 13:05:29.605884  395903 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1213 13:05:29.605906  395903 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1213 13:05:29.607158  395903 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1213 13:05:29.607175  395903 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1213 13:05:29.610236  395903 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 13:05:29.610296  395903 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 13:05:29.611990  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1213 13:05:29.651390  395903 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 13:05:29.651420  395903 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 13:05:29.653096  395903 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1213 13:05:29.653336  395903 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1213 13:05:29.655405  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1213 13:05:29.663041  395903 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 13:05:29.663067  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1213 13:05:29.675602  395903 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1213 13:05:29.678060  395903 node_ready.go:35] waiting up to 6m0s for node "addons-802674" to be "Ready" ...
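The 6-minute node wait above is implemented as a polling loop (the repeated node_ready.go "will retry" lines further down). As a rough equivalent, and assuming only the node name and kubeconfig path already shown in this log, the same condition can be expressed as a single kubectl call:

	# block until the node reports Ready, or give up after 6 minutes
	kubectl --kubeconfig /var/lib/minikube/kubeconfig wait \
	  --for=condition=Ready node/addons-802674 --timeout=6m

kubectl wait watches the node's status conditions server-side, which is what the retry loop in the log does by re-reading the node object.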
	I1213 13:05:29.681198  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 13:05:29.707993  395903 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1213 13:05:29.708179  395903 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1213 13:05:29.726692  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 13:05:29.777839  395903 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1213 13:05:29.777977  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1213 13:05:29.845269  395903 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1213 13:05:29.845363  395903 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1213 13:05:29.847321  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 13:05:29.940448  395903 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1213 13:05:29.940469  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1213 13:05:29.963096  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 13:05:29.990226  395903 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1213 13:05:29.990322  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1213 13:05:30.026392  395903 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 13:05:30.026422  395903 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1213 13:05:30.083300  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 13:05:30.182602  395903 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-802674" context rescaled to 1 replicas
	I1213 13:05:30.688005  395903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.1472861s)
	I1213 13:05:30.688042  395903 addons.go:495] Verifying addon ingress=true in "addons-802674"
	I1213 13:05:30.688076  395903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.137111414s)
	I1213 13:05:30.688232  395903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.119797975s)
	I1213 13:05:30.688277  395903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.076265801s)
	I1213 13:05:30.688299  395903 addons.go:495] Verifying addon registry=true in "addons-802674"
	I1213 13:05:30.688409  395903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.00714422s)
	I1213 13:05:30.688433  395903 addons.go:495] Verifying addon metrics-server=true in "addons-802674"
	I1213 13:05:30.688346  395903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.032909298s)
	I1213 13:05:30.690246  395903 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-802674 service yakd-dashboard -n yakd-dashboard
	
	I1213 13:05:30.690248  395903 out.go:179] * Verifying registry addon...
	I1213 13:05:30.690255  395903 out.go:179] * Verifying ingress addon...
	I1213 13:05:30.692651  395903 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1213 13:05:30.692651  395903 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1213 13:05:30.695080  395903 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1213 13:05:30.695172  395903 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 13:05:30.695190  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:31.102393  395903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.375605615s)
	I1213 13:05:31.102448  395903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.255018653s)
	W1213 13:05:31.102454  395903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 13:05:31.102484  395903 retry.go:31] will retry after 372.531156ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 13:05:31.102522  395903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.139394962s)
	I1213 13:05:31.102830  395903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.019488566s)
	I1213 13:05:31.102863  395903 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-802674"
	I1213 13:05:31.104421  395903 out.go:179] * Verifying csi-hostpath-driver addon...
	I1213 13:05:31.108397  395903 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1213 13:05:31.111456  395903 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	W1213 13:05:31.111480  395903 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
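The 'storage-provisioner-rancher' warning above is an optimistic-concurrency conflict: the addon reads the csi-hostpath-sc StorageClass, another routine updates it in the meantime, and the write is rejected because the cached resourceVersion is stale. A minimal sketch of redoing the default-class switch with patches (class names taken from the error message; the annotation is the standard Kubernetes default-class marker) would be:

	# patches are applied against the current server-side object,
	# so they avoid the "object has been modified" stale-version error
	kubectl patch storageclass csi-hostpath-sc \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	kubectl patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

This is only an illustration of the conflict; the test run continues despite the warning.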
	I1213 13:05:31.111480  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:31.195374  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:31.195533  395903 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1213 13:05:31.195551  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:31.475305  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 13:05:31.612419  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:31.682224  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:31.695992  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:31.696250  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:32.111177  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:32.195713  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:32.195937  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:32.612358  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:32.695128  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:32.695165  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:33.111381  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:33.196038  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:33.196038  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:33.611117  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:33.695884  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:33.696009  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:33.940181  395903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.464812412s)
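The forced re-apply that just completed resolves the earlier "no matches for kind VolumeSnapshotClass" failure: the VolumeSnapshotClass object was submitted in the same apply as the CRDs that define it, so the first attempt raced the CRD registration ("ensure CRDs are installed first") and minikube fell back to its retry. A sketch of the same installation without relying on the retry, using only the manifest paths already shown in this log, applies the CRDs first, waits for them to be established, and only then creates the class and controller:

	# 1. register the snapshot CRDs
	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	# 2. wait until the API server serves the new kinds
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	# 3. now objects of those kinds can be created
	kubectl apply \
	  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	  -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml

In this run the built-in retry with kubectl apply --force reaches the same end state, so the sketch only illustrates the ordering, not what minikube does.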
	I1213 13:05:34.111362  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:34.182044  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:34.195202  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:34.195337  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:34.612312  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:34.695372  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:34.695523  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:35.122213  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:35.196224  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:35.196437  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:35.613537  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:35.695050  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:35.695373  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:36.111049  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:36.195494  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:36.195648  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:36.611747  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:36.682328  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:36.695367  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:36.695527  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:36.907705  395903 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1213 13:05:36.907769  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:36.925766  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:37.026412  395903 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1213 13:05:37.038570  395903 addons.go:239] Setting addon gcp-auth=true in "addons-802674"
	I1213 13:05:37.038630  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:37.039025  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:37.055946  395903 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1213 13:05:37.056006  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:37.072483  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:37.112021  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:37.164906  395903 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 13:05:37.166089  395903 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1213 13:05:37.167021  395903 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1213 13:05:37.167039  395903 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1213 13:05:37.179676  395903 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1213 13:05:37.179696  395903 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1213 13:05:37.192068  395903 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 13:05:37.192085  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1213 13:05:37.195079  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:37.195201  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:37.205029  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 13:05:37.490706  395903 addons.go:495] Verifying addon gcp-auth=true in "addons-802674"
	I1213 13:05:37.492096  395903 out.go:179] * Verifying gcp-auth addon...
	I1213 13:05:37.494573  395903 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1213 13:05:37.496697  395903 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1213 13:05:37.496715  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:37.611804  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:37.695711  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:37.695728  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:37.997549  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:38.111146  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:38.196056  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:38.196166  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:38.498069  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:38.612271  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:38.696024  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:38.696241  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:38.998211  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:39.111913  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:39.181513  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:39.195815  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:39.196033  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:39.498078  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:39.611876  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:39.695746  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:39.695939  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:39.997692  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:40.111317  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:40.195377  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:40.195516  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:40.498737  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:40.611498  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:40.695165  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:40.695619  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:40.998254  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:41.111877  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:41.195689  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:41.195972  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:41.497901  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:41.611393  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:41.681846  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:41.696249  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:41.696343  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:41.998056  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:42.111543  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:42.195965  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:42.196136  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:42.497803  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:42.611398  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:42.695375  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:42.695599  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:42.997346  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:43.112108  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:43.195955  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:43.196111  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:43.498325  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:43.611920  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:43.695827  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:43.695918  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:43.997547  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:44.111444  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:44.182082  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:44.195663  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:44.195906  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:44.497399  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:44.612080  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:44.695624  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:44.695719  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:44.997482  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:45.111107  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:45.195897  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:45.196081  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:45.498074  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:45.611810  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:45.695383  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:45.695530  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:45.998510  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:46.111308  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:46.195225  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:46.195295  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:46.498375  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:46.612280  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:46.681821  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:46.695220  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:46.695272  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:46.998192  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:47.111878  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:47.195674  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:47.195823  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:47.497798  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:47.611441  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:47.695592  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:47.695632  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:47.998482  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:48.111166  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:48.196260  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:48.196306  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:48.498109  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:48.611843  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:48.695658  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:48.695845  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:48.997583  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:49.111365  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:49.181856  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:49.195104  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:49.195221  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:49.498108  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:49.612089  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:49.695355  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:49.695397  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:49.997314  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:50.111900  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:50.195621  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:50.195886  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:50.497982  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:50.611892  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:50.695797  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:50.695975  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:50.997855  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:51.111578  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:51.182071  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:51.195355  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:51.195520  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:51.498350  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:51.611195  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:51.694960  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:51.695182  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:51.998352  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:52.112115  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:52.195315  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:52.195434  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:52.498454  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:52.611102  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:52.695672  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:52.695740  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:52.997399  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:53.110769  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:53.195289  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:53.195337  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:53.498414  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:53.612232  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:53.682043  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:53.695344  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:53.695573  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:53.997651  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:54.111381  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:54.194968  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:54.195201  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:54.497898  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:54.611661  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:54.695550  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:54.695826  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:54.997548  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:55.111128  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:55.195089  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:55.195288  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:55.498092  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:55.611930  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:55.695877  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:55.695966  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:55.997705  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:56.111106  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:56.181731  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:56.195889  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:56.196152  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:56.497515  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:56.611148  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:56.694973  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:56.695202  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:56.998052  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:57.111722  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:57.195416  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:57.195544  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:57.498272  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:57.611935  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:57.695896  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:57.696083  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:57.997466  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:58.111189  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:58.182112  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:58.195495  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:58.195690  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:58.497248  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:58.611965  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:58.695916  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:58.696003  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:58.998106  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:59.111897  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:59.195850  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:59.196044  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:59.498039  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:59.611855  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:59.695613  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:59.695818  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:59.997670  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:00.111410  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:00.195382  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:00.195614  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:00.497553  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:00.611074  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:06:00.681522  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:06:00.695716  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:00.695974  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:00.997278  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:01.111672  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:01.195331  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:01.195392  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:01.498345  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:01.611129  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:01.696175  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:01.696243  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:01.997995  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:02.111793  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:02.195494  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:02.195592  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:02.498206  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:02.612027  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:02.695652  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:02.695881  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:02.997603  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:03.111149  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:06:03.181465  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:06:03.195734  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:03.195852  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:03.497938  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:03.611720  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:03.695462  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:03.695506  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:03.998207  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:04.111876  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:04.195448  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:04.195698  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:04.497797  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:04.611584  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:04.695662  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:04.695762  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:04.997765  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:05.111418  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:06:05.181961  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:06:05.195300  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:05.195347  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:05.498230  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:05.611940  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:05.695616  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:05.697563  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:05.997325  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:06.111995  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:06.195717  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:06.195907  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:06.497620  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:06.611393  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:06.696082  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:06.696181  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:06.998220  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:07.111725  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:06:07.182198  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:06:07.195764  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:07.196008  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:07.498466  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:07.611096  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:07.696322  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:07.696591  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:07.997084  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:08.111824  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:08.195846  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:08.196147  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:08.497709  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:08.611299  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:08.695233  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:08.695452  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:08.997413  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:09.111977  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:09.195808  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:09.196022  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:09.498042  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:09.611614  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:06:09.682233  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:06:09.695591  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:09.695856  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:09.997660  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:10.111324  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:10.181969  395903 node_ready.go:49] node "addons-802674" is "Ready"
	I1213 13:06:10.182014  395903 node_ready.go:38] duration metric: took 40.503331317s for node "addons-802674" to be "Ready" ...
	I1213 13:06:10.182036  395903 api_server.go:52] waiting for apiserver process to appear ...
	I1213 13:06:10.182108  395903 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:06:10.198326  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:10.198400  395903 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 13:06:10.198423  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:10.203127  395903 api_server.go:72] duration metric: took 41.079530905s to wait for apiserver process to appear ...
	I1213 13:06:10.203153  395903 api_server.go:88] waiting for apiserver healthz status ...
	I1213 13:06:10.203183  395903 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1213 13:06:10.209790  395903 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1213 13:06:10.210698  395903 api_server.go:141] control plane version: v1.34.2
	I1213 13:06:10.210725  395903 api_server.go:131] duration metric: took 7.563433ms to wait for apiserver health ...
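(Aside: the healthz probe logged above is just an HTTPS GET against the apiserver that expects a 200 response with the body "ok". Below is a minimal Go sketch of an equivalent manual check; the endpoint URL is taken from the log, while the client setup, the InsecureSkipVerify shortcut, and the retry budget are illustrative assumptions, not minikube's own implementation.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz polls the apiserver /healthz endpoint until it returns 200 "ok"
// or the deadline expires. TLS verification is skipped only to keep the sketch
// self-contained; a real check would trust the cluster CA instead.
func checkHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver healthz not ready within %s", timeout)
}

func main() {
	// Endpoint as reported in the log above.
	if err := checkHealthz("https://192.168.49.2:8443/healthz", 1*time.Minute); err != nil {
		panic(err)
	}
}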
	I1213 13:06:10.210739  395903 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 13:06:10.219632  395903 system_pods.go:59] 20 kube-system pods found
	I1213 13:06:10.219658  395903 system_pods.go:61] "amd-gpu-device-plugin-jrjdp" [80dc3d87-78c3-4beb-8541-e2a6cf003f4e] Pending
	I1213 13:06:10.219676  395903 system_pods.go:61] "coredns-66bc5c9577-bqhwx" [6e6787a3-4665-472f-8a18-3c930bf5db5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:06:10.219680  395903 system_pods.go:61] "csi-hostpath-attacher-0" [129d11a0-a7f2-496e-8aff-8e11fcb6fb13] Pending
	I1213 13:06:10.219686  395903 system_pods.go:61] "csi-hostpath-resizer-0" [57b8fb38-5390-4b4b-9f7c-4d1a77340190] Pending
	I1213 13:06:10.219689  395903 system_pods.go:61] "csi-hostpathplugin-hzzp2" [36eb2635-7c1e-4325-a786-a3b95ca71a86] Pending
	I1213 13:06:10.219693  395903 system_pods.go:61] "etcd-addons-802674" [f7a0a060-f323-418a-9a4a-6c6eefe00b21] Running
	I1213 13:06:10.219696  395903 system_pods.go:61] "kindnet-fctx2" [5f957208-1d1f-4aeb-bca9-523b32917426] Running
	I1213 13:06:10.219700  395903 system_pods.go:61] "kube-apiserver-addons-802674" [06bad0dc-8b07-472f-8f2d-2971df4f51f1] Running
	I1213 13:06:10.219709  395903 system_pods.go:61] "kube-controller-manager-addons-802674" [fa7e91b8-ce72-44b7-8357-f51092368fe7] Running
	I1213 13:06:10.219714  395903 system_pods.go:61] "kube-ingress-dns-minikube" [0839f500-727a-4f58-89b3-befe4823e506] Pending
	I1213 13:06:10.219719  395903 system_pods.go:61] "kube-proxy-2ss46" [bf960d04-5c48-4e0d-816c-31c2092f80a0] Running
	I1213 13:06:10.219726  395903 system_pods.go:61] "kube-scheduler-addons-802674" [6be30b60-2b6f-4625-a35f-fa91972f8f6a] Running
	I1213 13:06:10.219733  395903 system_pods.go:61] "metrics-server-85b7d694d7-lmm9f" [0e2bcdb7-46b3-4d40-ab19-396aa47a4f0e] Pending
	I1213 13:06:10.219741  395903 system_pods.go:61] "nvidia-device-plugin-daemonset-bldsd" [58d6cc59-4315-40ba-b95c-caaeeea9ef12] Pending
	I1213 13:06:10.219748  395903 system_pods.go:61] "registry-6b586f9694-8nh6x" [026d87d5-39ae-4470-87b4-17ae3e729d61] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 13:06:10.219789  395903 system_pods.go:61] "registry-creds-764b6fb674-vppgx" [285d2b4d-1e18-410a-9ac4-2fe91c56bfd2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 13:06:10.219801  395903 system_pods.go:61] "registry-proxy-q4bmk" [558bd886-2608-4e2a-b513-906ab0a12e90] Pending
	I1213 13:06:10.219807  395903 system_pods.go:61] "snapshot-controller-7d9fbc56b8-mqxdv" [cb2184ed-d11b-48d0-9eed-22dc7d0ff650] Pending
	I1213 13:06:10.219812  395903 system_pods.go:61] "snapshot-controller-7d9fbc56b8-nzsxs" [c412ecb7-14e7-4e98-bdff-eaf1d5ef351f] Pending
	I1213 13:06:10.219817  395903 system_pods.go:61] "storage-provisioner" [72c4a30a-0415-4cab-92ce-6e20600ca8b1] Pending
	I1213 13:06:10.219826  395903 system_pods.go:74] duration metric: took 9.079384ms to wait for pod list to return data ...
	I1213 13:06:10.219846  395903 default_sa.go:34] waiting for default service account to be created ...
	I1213 13:06:10.221630  395903 default_sa.go:45] found service account: "default"
	I1213 13:06:10.221657  395903 default_sa.go:55] duration metric: took 1.804296ms for default service account to be created ...
	I1213 13:06:10.221668  395903 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 13:06:10.230912  395903 system_pods.go:86] 20 kube-system pods found
	I1213 13:06:10.230942  395903 system_pods.go:89] "amd-gpu-device-plugin-jrjdp" [80dc3d87-78c3-4beb-8541-e2a6cf003f4e] Pending
	I1213 13:06:10.230956  395903 system_pods.go:89] "coredns-66bc5c9577-bqhwx" [6e6787a3-4665-472f-8a18-3c930bf5db5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:06:10.230963  395903 system_pods.go:89] "csi-hostpath-attacher-0" [129d11a0-a7f2-496e-8aff-8e11fcb6fb13] Pending
	I1213 13:06:10.230970  395903 system_pods.go:89] "csi-hostpath-resizer-0" [57b8fb38-5390-4b4b-9f7c-4d1a77340190] Pending
	I1213 13:06:10.230975  395903 system_pods.go:89] "csi-hostpathplugin-hzzp2" [36eb2635-7c1e-4325-a786-a3b95ca71a86] Pending
	I1213 13:06:10.230985  395903 system_pods.go:89] "etcd-addons-802674" [f7a0a060-f323-418a-9a4a-6c6eefe00b21] Running
	I1213 13:06:10.230991  395903 system_pods.go:89] "kindnet-fctx2" [5f957208-1d1f-4aeb-bca9-523b32917426] Running
	I1213 13:06:10.231000  395903 system_pods.go:89] "kube-apiserver-addons-802674" [06bad0dc-8b07-472f-8f2d-2971df4f51f1] Running
	I1213 13:06:10.231005  395903 system_pods.go:89] "kube-controller-manager-addons-802674" [fa7e91b8-ce72-44b7-8357-f51092368fe7] Running
	I1213 13:06:10.231018  395903 system_pods.go:89] "kube-ingress-dns-minikube" [0839f500-727a-4f58-89b3-befe4823e506] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 13:06:10.231027  395903 system_pods.go:89] "kube-proxy-2ss46" [bf960d04-5c48-4e0d-816c-31c2092f80a0] Running
	I1213 13:06:10.231033  395903 system_pods.go:89] "kube-scheduler-addons-802674" [6be30b60-2b6f-4625-a35f-fa91972f8f6a] Running
	I1213 13:06:10.231041  395903 system_pods.go:89] "metrics-server-85b7d694d7-lmm9f" [0e2bcdb7-46b3-4d40-ab19-396aa47a4f0e] Pending
	I1213 13:06:10.231046  395903 system_pods.go:89] "nvidia-device-plugin-daemonset-bldsd" [58d6cc59-4315-40ba-b95c-caaeeea9ef12] Pending
	I1213 13:06:10.231054  395903 system_pods.go:89] "registry-6b586f9694-8nh6x" [026d87d5-39ae-4470-87b4-17ae3e729d61] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 13:06:10.231066  395903 system_pods.go:89] "registry-creds-764b6fb674-vppgx" [285d2b4d-1e18-410a-9ac4-2fe91c56bfd2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 13:06:10.231071  395903 system_pods.go:89] "registry-proxy-q4bmk" [558bd886-2608-4e2a-b513-906ab0a12e90] Pending
	I1213 13:06:10.231077  395903 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mqxdv" [cb2184ed-d11b-48d0-9eed-22dc7d0ff650] Pending
	I1213 13:06:10.231084  395903 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nzsxs" [c412ecb7-14e7-4e98-bdff-eaf1d5ef351f] Pending
	I1213 13:06:10.231088  395903 system_pods.go:89] "storage-provisioner" [72c4a30a-0415-4cab-92ce-6e20600ca8b1] Pending
	I1213 13:06:10.231108  395903 retry.go:31] will retry after 296.337411ms: missing components: kube-dns
	I1213 13:06:10.499281  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:10.603585  395903 system_pods.go:86] 20 kube-system pods found
	I1213 13:06:10.603629  395903 system_pods.go:89] "amd-gpu-device-plugin-jrjdp" [80dc3d87-78c3-4beb-8541-e2a6cf003f4e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 13:06:10.603643  395903 system_pods.go:89] "coredns-66bc5c9577-bqhwx" [6e6787a3-4665-472f-8a18-3c930bf5db5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:06:10.603654  395903 system_pods.go:89] "csi-hostpath-attacher-0" [129d11a0-a7f2-496e-8aff-8e11fcb6fb13] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 13:06:10.603668  395903 system_pods.go:89] "csi-hostpath-resizer-0" [57b8fb38-5390-4b4b-9f7c-4d1a77340190] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 13:06:10.603689  395903 system_pods.go:89] "csi-hostpathplugin-hzzp2" [36eb2635-7c1e-4325-a786-a3b95ca71a86] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 13:06:10.603701  395903 system_pods.go:89] "etcd-addons-802674" [f7a0a060-f323-418a-9a4a-6c6eefe00b21] Running
	I1213 13:06:10.603709  395903 system_pods.go:89] "kindnet-fctx2" [5f957208-1d1f-4aeb-bca9-523b32917426] Running
	I1213 13:06:10.603717  395903 system_pods.go:89] "kube-apiserver-addons-802674" [06bad0dc-8b07-472f-8f2d-2971df4f51f1] Running
	I1213 13:06:10.603726  395903 system_pods.go:89] "kube-controller-manager-addons-802674" [fa7e91b8-ce72-44b7-8357-f51092368fe7] Running
	I1213 13:06:10.603735  395903 system_pods.go:89] "kube-ingress-dns-minikube" [0839f500-727a-4f58-89b3-befe4823e506] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 13:06:10.603740  395903 system_pods.go:89] "kube-proxy-2ss46" [bf960d04-5c48-4e0d-816c-31c2092f80a0] Running
	I1213 13:06:10.603746  395903 system_pods.go:89] "kube-scheduler-addons-802674" [6be30b60-2b6f-4625-a35f-fa91972f8f6a] Running
	I1213 13:06:10.603753  395903 system_pods.go:89] "metrics-server-85b7d694d7-lmm9f" [0e2bcdb7-46b3-4d40-ab19-396aa47a4f0e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 13:06:10.603762  395903 system_pods.go:89] "nvidia-device-plugin-daemonset-bldsd" [58d6cc59-4315-40ba-b95c-caaeeea9ef12] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 13:06:10.603770  395903 system_pods.go:89] "registry-6b586f9694-8nh6x" [026d87d5-39ae-4470-87b4-17ae3e729d61] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 13:06:10.603798  395903 system_pods.go:89] "registry-creds-764b6fb674-vppgx" [285d2b4d-1e18-410a-9ac4-2fe91c56bfd2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 13:06:10.603805  395903 system_pods.go:89] "registry-proxy-q4bmk" [558bd886-2608-4e2a-b513-906ab0a12e90] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 13:06:10.603820  395903 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mqxdv" [cb2184ed-d11b-48d0-9eed-22dc7d0ff650] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:10.603834  395903 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nzsxs" [c412ecb7-14e7-4e98-bdff-eaf1d5ef351f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:10.603841  395903 system_pods.go:89] "storage-provisioner" [72c4a30a-0415-4cab-92ce-6e20600ca8b1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:06:10.603869  395903 retry.go:31] will retry after 238.442167ms: missing components: kube-dns
	I1213 13:06:10.699072  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:10.699164  395903 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 13:06:10.699181  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:10.699182  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:10.847374  395903 system_pods.go:86] 20 kube-system pods found
	I1213 13:06:10.847408  395903 system_pods.go:89] "amd-gpu-device-plugin-jrjdp" [80dc3d87-78c3-4beb-8541-e2a6cf003f4e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 13:06:10.847417  395903 system_pods.go:89] "coredns-66bc5c9577-bqhwx" [6e6787a3-4665-472f-8a18-3c930bf5db5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:06:10.847424  395903 system_pods.go:89] "csi-hostpath-attacher-0" [129d11a0-a7f2-496e-8aff-8e11fcb6fb13] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 13:06:10.847430  395903 system_pods.go:89] "csi-hostpath-resizer-0" [57b8fb38-5390-4b4b-9f7c-4d1a77340190] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 13:06:10.847436  395903 system_pods.go:89] "csi-hostpathplugin-hzzp2" [36eb2635-7c1e-4325-a786-a3b95ca71a86] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 13:06:10.847440  395903 system_pods.go:89] "etcd-addons-802674" [f7a0a060-f323-418a-9a4a-6c6eefe00b21] Running
	I1213 13:06:10.847446  395903 system_pods.go:89] "kindnet-fctx2" [5f957208-1d1f-4aeb-bca9-523b32917426] Running
	I1213 13:06:10.847451  395903 system_pods.go:89] "kube-apiserver-addons-802674" [06bad0dc-8b07-472f-8f2d-2971df4f51f1] Running
	I1213 13:06:10.847456  395903 system_pods.go:89] "kube-controller-manager-addons-802674" [fa7e91b8-ce72-44b7-8357-f51092368fe7] Running
	I1213 13:06:10.847461  395903 system_pods.go:89] "kube-ingress-dns-minikube" [0839f500-727a-4f58-89b3-befe4823e506] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 13:06:10.847467  395903 system_pods.go:89] "kube-proxy-2ss46" [bf960d04-5c48-4e0d-816c-31c2092f80a0] Running
	I1213 13:06:10.847471  395903 system_pods.go:89] "kube-scheduler-addons-802674" [6be30b60-2b6f-4625-a35f-fa91972f8f6a] Running
	I1213 13:06:10.847475  395903 system_pods.go:89] "metrics-server-85b7d694d7-lmm9f" [0e2bcdb7-46b3-4d40-ab19-396aa47a4f0e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 13:06:10.847480  395903 system_pods.go:89] "nvidia-device-plugin-daemonset-bldsd" [58d6cc59-4315-40ba-b95c-caaeeea9ef12] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 13:06:10.847488  395903 system_pods.go:89] "registry-6b586f9694-8nh6x" [026d87d5-39ae-4470-87b4-17ae3e729d61] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 13:06:10.847493  395903 system_pods.go:89] "registry-creds-764b6fb674-vppgx" [285d2b4d-1e18-410a-9ac4-2fe91c56bfd2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 13:06:10.847500  395903 system_pods.go:89] "registry-proxy-q4bmk" [558bd886-2608-4e2a-b513-906ab0a12e90] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 13:06:10.847512  395903 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mqxdv" [cb2184ed-d11b-48d0-9eed-22dc7d0ff650] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:10.847518  395903 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nzsxs" [c412ecb7-14e7-4e98-bdff-eaf1d5ef351f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:10.847524  395903 system_pods.go:89] "storage-provisioner" [72c4a30a-0415-4cab-92ce-6e20600ca8b1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:06:10.847540  395903 retry.go:31] will retry after 354.737324ms: missing components: kube-dns
	I1213 13:06:10.997888  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:11.112637  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:11.197150  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:11.197179  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:11.207243  395903 system_pods.go:86] 20 kube-system pods found
	I1213 13:06:11.207278  395903 system_pods.go:89] "amd-gpu-device-plugin-jrjdp" [80dc3d87-78c3-4beb-8541-e2a6cf003f4e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 13:06:11.207287  395903 system_pods.go:89] "coredns-66bc5c9577-bqhwx" [6e6787a3-4665-472f-8a18-3c930bf5db5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:06:11.207298  395903 system_pods.go:89] "csi-hostpath-attacher-0" [129d11a0-a7f2-496e-8aff-8e11fcb6fb13] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 13:06:11.207306  395903 system_pods.go:89] "csi-hostpath-resizer-0" [57b8fb38-5390-4b4b-9f7c-4d1a77340190] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 13:06:11.207314  395903 system_pods.go:89] "csi-hostpathplugin-hzzp2" [36eb2635-7c1e-4325-a786-a3b95ca71a86] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 13:06:11.207320  395903 system_pods.go:89] "etcd-addons-802674" [f7a0a060-f323-418a-9a4a-6c6eefe00b21] Running
	I1213 13:06:11.207327  395903 system_pods.go:89] "kindnet-fctx2" [5f957208-1d1f-4aeb-bca9-523b32917426] Running
	I1213 13:06:11.207340  395903 system_pods.go:89] "kube-apiserver-addons-802674" [06bad0dc-8b07-472f-8f2d-2971df4f51f1] Running
	I1213 13:06:11.207346  395903 system_pods.go:89] "kube-controller-manager-addons-802674" [fa7e91b8-ce72-44b7-8357-f51092368fe7] Running
	I1213 13:06:11.207357  395903 system_pods.go:89] "kube-ingress-dns-minikube" [0839f500-727a-4f58-89b3-befe4823e506] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 13:06:11.207366  395903 system_pods.go:89] "kube-proxy-2ss46" [bf960d04-5c48-4e0d-816c-31c2092f80a0] Running
	I1213 13:06:11.207372  395903 system_pods.go:89] "kube-scheduler-addons-802674" [6be30b60-2b6f-4625-a35f-fa91972f8f6a] Running
	I1213 13:06:11.207383  395903 system_pods.go:89] "metrics-server-85b7d694d7-lmm9f" [0e2bcdb7-46b3-4d40-ab19-396aa47a4f0e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 13:06:11.207391  395903 system_pods.go:89] "nvidia-device-plugin-daemonset-bldsd" [58d6cc59-4315-40ba-b95c-caaeeea9ef12] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 13:06:11.207400  395903 system_pods.go:89] "registry-6b586f9694-8nh6x" [026d87d5-39ae-4470-87b4-17ae3e729d61] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 13:06:11.207414  395903 system_pods.go:89] "registry-creds-764b6fb674-vppgx" [285d2b4d-1e18-410a-9ac4-2fe91c56bfd2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 13:06:11.207422  395903 system_pods.go:89] "registry-proxy-q4bmk" [558bd886-2608-4e2a-b513-906ab0a12e90] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 13:06:11.207430  395903 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mqxdv" [cb2184ed-d11b-48d0-9eed-22dc7d0ff650] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:11.207437  395903 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nzsxs" [c412ecb7-14e7-4e98-bdff-eaf1d5ef351f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:11.207444  395903 system_pods.go:89] "storage-provisioner" [72c4a30a-0415-4cab-92ce-6e20600ca8b1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:06:11.207467  395903 retry.go:31] will retry after 510.78588ms: missing components: kube-dns
	I1213 13:06:11.497836  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:11.612215  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:11.696603  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:11.697183  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:11.723320  395903 system_pods.go:86] 20 kube-system pods found
	I1213 13:06:11.723361  395903 system_pods.go:89] "amd-gpu-device-plugin-jrjdp" [80dc3d87-78c3-4beb-8541-e2a6cf003f4e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 13:06:11.723371  395903 system_pods.go:89] "coredns-66bc5c9577-bqhwx" [6e6787a3-4665-472f-8a18-3c930bf5db5d] Running
	I1213 13:06:11.723381  395903 system_pods.go:89] "csi-hostpath-attacher-0" [129d11a0-a7f2-496e-8aff-8e11fcb6fb13] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 13:06:11.723389  395903 system_pods.go:89] "csi-hostpath-resizer-0" [57b8fb38-5390-4b4b-9f7c-4d1a77340190] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 13:06:11.723398  395903 system_pods.go:89] "csi-hostpathplugin-hzzp2" [36eb2635-7c1e-4325-a786-a3b95ca71a86] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 13:06:11.723405  395903 system_pods.go:89] "etcd-addons-802674" [f7a0a060-f323-418a-9a4a-6c6eefe00b21] Running
	I1213 13:06:11.723412  395903 system_pods.go:89] "kindnet-fctx2" [5f957208-1d1f-4aeb-bca9-523b32917426] Running
	I1213 13:06:11.723429  395903 system_pods.go:89] "kube-apiserver-addons-802674" [06bad0dc-8b07-472f-8f2d-2971df4f51f1] Running
	I1213 13:06:11.723436  395903 system_pods.go:89] "kube-controller-manager-addons-802674" [fa7e91b8-ce72-44b7-8357-f51092368fe7] Running
	I1213 13:06:11.723586  395903 system_pods.go:89] "kube-ingress-dns-minikube" [0839f500-727a-4f58-89b3-befe4823e506] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 13:06:11.723600  395903 system_pods.go:89] "kube-proxy-2ss46" [bf960d04-5c48-4e0d-816c-31c2092f80a0] Running
	I1213 13:06:11.723607  395903 system_pods.go:89] "kube-scheduler-addons-802674" [6be30b60-2b6f-4625-a35f-fa91972f8f6a] Running
	I1213 13:06:11.723623  395903 system_pods.go:89] "metrics-server-85b7d694d7-lmm9f" [0e2bcdb7-46b3-4d40-ab19-396aa47a4f0e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 13:06:11.723635  395903 system_pods.go:89] "nvidia-device-plugin-daemonset-bldsd" [58d6cc59-4315-40ba-b95c-caaeeea9ef12] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 13:06:11.723646  395903 system_pods.go:89] "registry-6b586f9694-8nh6x" [026d87d5-39ae-4470-87b4-17ae3e729d61] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 13:06:11.723656  395903 system_pods.go:89] "registry-creds-764b6fb674-vppgx" [285d2b4d-1e18-410a-9ac4-2fe91c56bfd2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 13:06:11.723667  395903 system_pods.go:89] "registry-proxy-q4bmk" [558bd886-2608-4e2a-b513-906ab0a12e90] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 13:06:11.723676  395903 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mqxdv" [cb2184ed-d11b-48d0-9eed-22dc7d0ff650] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:11.723685  395903 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nzsxs" [c412ecb7-14e7-4e98-bdff-eaf1d5ef351f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:11.723694  395903 system_pods.go:89] "storage-provisioner" [72c4a30a-0415-4cab-92ce-6e20600ca8b1] Running
	I1213 13:06:11.723706  395903 system_pods.go:126] duration metric: took 1.50203012s to wait for k8s-apps to be running ...
	I1213 13:06:11.723720  395903 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 13:06:11.723823  395903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:06:11.739971  395903 system_svc.go:56] duration metric: took 16.243997ms WaitForService to wait for kubelet
	I1213 13:06:11.740002  395903 kubeadm.go:587] duration metric: took 42.616407317s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:06:11.740024  395903 node_conditions.go:102] verifying NodePressure condition ...
	I1213 13:06:11.742932  395903 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 13:06:11.742961  395903 node_conditions.go:123] node cpu capacity is 8
	I1213 13:06:11.742979  395903 node_conditions.go:105] duration metric: took 2.948877ms to run NodePressure ...
	I1213 13:06:11.742997  395903 start.go:242] waiting for startup goroutines ...
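(Aside: the kapi.go:96 lines that dominate this log are a poll loop over a Kubernetes label selector, waiting for all matching pods to reach the Running phase. The client-go sketch below illustrates that pattern; the kubeconfig path, namespace, selector, poll interval, and timeout are illustrative assumptions, and this is not the kapi.go code itself.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning lists pods matching selector in ns and returns once every
// matching pod reports phase Running, or errors out when the deadline passes.
func waitForPodsRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("pods %q in %s not Running within %s", selector, ns, timeout)
}

func main() {
	// Kubeconfig path is a placeholder; point it at the profile under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodsRunning(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 5*time.Minute); err != nil {
		panic(err)
	}
}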
	I1213 13:06:12.002765  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:12.112072  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:12.196010  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:12.196175  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:12.498958  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:12.612187  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:12.696152  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:12.696203  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:12.998528  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:13.111646  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:13.196142  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:13.196180  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:13.499037  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:13.615112  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:13.697442  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:13.697723  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:13.999214  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:14.113146  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:14.196518  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:14.197025  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:14.498328  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:14.612630  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:14.713534  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:14.713687  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:14.998351  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:15.112002  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:15.196057  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:15.196119  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:15.498248  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:15.613053  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:15.696124  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:15.696299  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:15.998988  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:16.112447  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:16.196651  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:16.196763  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:16.498190  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:16.612750  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:16.774262  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:16.774280  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:16.998789  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:17.112279  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:17.195922  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:17.195922  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:17.498287  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:17.612698  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:17.696596  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:17.696668  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:17.998468  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:18.112845  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:18.197074  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:18.197195  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:18.498815  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:18.612310  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:18.696670  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:18.696709  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:18.999106  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:19.114615  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:19.196538  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:19.196612  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:19.498092  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:19.611880  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:19.695624  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:19.695880  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:19.997579  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:20.111994  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:20.196520  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:20.196829  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:20.498383  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:20.612298  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:20.696097  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:20.696218  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:21.033230  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:21.112226  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:21.196559  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:21.196640  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:21.497448  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:21.613025  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:21.696099  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:21.696499  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:21.999695  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:22.111974  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:22.212221  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:22.212264  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:22.499024  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:22.612572  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:22.696605  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:22.696888  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:22.998338  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:23.113037  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:23.196040  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:23.196113  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:23.498611  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:23.611493  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:23.696226  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:23.696374  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:23.998764  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:24.112210  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:24.196478  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:24.196521  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:24.497821  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:24.612513  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:24.696795  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:24.696823  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:24.998836  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:25.112021  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:25.196282  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:25.196415  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:25.498244  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:25.612163  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:25.695738  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:25.696037  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:25.997990  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:26.112005  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:26.195752  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:26.195821  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:26.497983  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:26.612554  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:26.706583  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:26.706638  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:26.998263  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:27.114273  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:27.196357  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:27.196395  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:27.498599  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:27.611770  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:27.695450  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:27.695492  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:27.998756  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:28.111630  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:28.196430  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:28.196641  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:28.497910  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:28.612336  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:28.696420  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:28.696523  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:28.999406  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:29.113159  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:29.196473  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:29.196524  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:29.498006  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:29.612644  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:29.696859  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:29.696925  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:29.997869  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:30.111933  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:30.195974  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:30.196120  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:30.498380  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:30.613034  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:30.713492  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:30.713520  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:31.004098  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:31.112701  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:31.196726  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:31.196759  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:31.498073  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:31.612386  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:31.713276  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:31.713328  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:31.998810  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:32.111790  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:32.196470  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:32.196602  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:32.499133  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:32.612232  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:32.696287  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:32.696412  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:32.999334  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:33.112657  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:33.196742  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:33.196876  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:33.497682  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:33.611626  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:33.696463  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:33.696474  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:33.999629  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:34.111900  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:34.197060  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:34.197261  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:34.498973  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:34.612192  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:34.696048  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:34.696141  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:34.998730  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:35.114184  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:35.198357  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:35.199693  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:35.499849  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:35.612880  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:35.697947  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:35.698138  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:35.998901  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:36.114017  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:36.196726  395903 kapi.go:107] duration metric: took 1m5.504070223s to wait for kubernetes.io/minikube-addons=registry ...
	I1213 13:06:36.196823  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:36.497747  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:36.611847  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:36.698150  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:36.998255  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:37.112795  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:37.196905  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:37.517019  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:37.611906  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:37.695732  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:37.997836  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:38.112505  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:38.196871  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:38.497632  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:38.611623  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:38.696716  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:38.998156  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:39.112185  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:39.196587  395903 kapi.go:107] duration metric: took 1m8.503933393s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1213 13:06:39.498259  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:39.612162  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:40.002836  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:40.114414  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:40.498934  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:40.612298  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:40.999043  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:41.112500  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:41.497677  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:41.611936  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:41.999847  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:42.112856  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:42.497628  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:42.612017  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:42.998450  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:43.112528  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:43.498101  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:43.748308  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:43.998387  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:44.112558  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:44.498332  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:44.612759  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:44.998218  395903 kapi.go:107] duration metric: took 1m7.503669997s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1213 13:06:44.999896  395903 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-802674 cluster.
	I1213 13:06:45.001269  395903 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1213 13:06:45.002589  395903 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1213 13:06:45.113208  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:45.611942  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:46.113218  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:46.611724  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:47.112168  395903 kapi.go:107] duration metric: took 1m16.003768664s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1213 13:06:47.113752  395903 out.go:179] * Enabled addons: nvidia-device-plugin, ingress-dns, registry-creds, amd-gpu-device-plugin, storage-provisioner, cloud-spanner, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1213 13:06:47.115166  395903 addons.go:530] duration metric: took 1m17.991519282s for enable addons: enabled=[nvidia-device-plugin ingress-dns registry-creds amd-gpu-device-plugin storage-provisioner cloud-spanner inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1213 13:06:47.115212  395903 start.go:247] waiting for cluster config update ...
	I1213 13:06:47.115232  395903 start.go:256] writing updated cluster config ...
	I1213 13:06:47.115485  395903 ssh_runner.go:195] Run: rm -f paused
	I1213 13:06:47.119601  395903 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:06:47.122462  395903 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bqhwx" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:47.126247  395903 pod_ready.go:94] pod "coredns-66bc5c9577-bqhwx" is "Ready"
	I1213 13:06:47.126266  395903 pod_ready.go:86] duration metric: took 3.783169ms for pod "coredns-66bc5c9577-bqhwx" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:47.128163  395903 pod_ready.go:83] waiting for pod "etcd-addons-802674" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:47.131322  395903 pod_ready.go:94] pod "etcd-addons-802674" is "Ready"
	I1213 13:06:47.131343  395903 pod_ready.go:86] duration metric: took 3.161341ms for pod "etcd-addons-802674" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:47.132975  395903 pod_ready.go:83] waiting for pod "kube-apiserver-addons-802674" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:47.136230  395903 pod_ready.go:94] pod "kube-apiserver-addons-802674" is "Ready"
	I1213 13:06:47.136249  395903 pod_ready.go:86] duration metric: took 3.254569ms for pod "kube-apiserver-addons-802674" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:47.137934  395903 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-802674" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:47.523552  395903 pod_ready.go:94] pod "kube-controller-manager-addons-802674" is "Ready"
	I1213 13:06:47.523587  395903 pod_ready.go:86] duration metric: took 385.634772ms for pod "kube-controller-manager-addons-802674" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:47.723844  395903 pod_ready.go:83] waiting for pod "kube-proxy-2ss46" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:48.123600  395903 pod_ready.go:94] pod "kube-proxy-2ss46" is "Ready"
	I1213 13:06:48.123630  395903 pod_ready.go:86] duration metric: took 399.760698ms for pod "kube-proxy-2ss46" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:48.323829  395903 pod_ready.go:83] waiting for pod "kube-scheduler-addons-802674" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:48.723833  395903 pod_ready.go:94] pod "kube-scheduler-addons-802674" is "Ready"
	I1213 13:06:48.723871  395903 pod_ready.go:86] duration metric: took 400.014637ms for pod "kube-scheduler-addons-802674" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:48.723889  395903 pod_ready.go:40] duration metric: took 1.604253671s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:06:48.769108  395903 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 13:06:48.771066  395903 out.go:179] * Done! kubectl is now configured to use "addons-802674" cluster and "default" namespace by default
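The long run of "waiting for pod ... current state: Pending" lines above is minikube polling pods by label selector until they report Ready. The sketch below is only a rough illustration of that loop using client-go; it is not minikube's kapi.go code, and the kubeconfig path, selector, timeout, and poll interval are illustrative assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls pods matching selector in ns until one reports the Ready
// condition, roughly what the "waiting for pod ..." lines above reflect.
func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // poll interval is an assumption
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()
	if err := waitForLabel(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
		panic(err)
	}
	fmt.Println("registry pod is Ready")
}

The gcp-auth output above also says the webhook mounts GCP credentials into every new pod unless the pod carries a label with the gcp-auth-skip-secret key. Continuing the sketch (reusing cs and ctx from main above), a pod that opts out might look like the fragment below; only the label key comes from the message, while the value "true" and the pod/image names are illustrative choices.

	// Opt a new pod out of the gcp-auth credential mount.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "docker.io/kicbase/echo-server:1.0",
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}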
	
	
	==> CRI-O <==
	Dec 13 13:09:31 addons-802674 crio[777]: time="2025-12-13T13:09:31.719552542Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-cnjm2/POD" id=acb60b30-9135-4f8c-908a-e4ef04338c61 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:09:31 addons-802674 crio[777]: time="2025-12-13T13:09:31.719639854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:09:31 addons-802674 crio[777]: time="2025-12-13T13:09:31.726645348Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-cnjm2 Namespace:default ID:6ea3edebf2a1b3fc4b35b8e18cd7c1f569901db67ad155d03771f8939551111f UID:c271d086-094c-4fde-ac55-fc63cff492ce NetNS:/var/run/netns/f15ea4d6-44a8-46ea-a50d-407145230c48 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000550e50}] Aliases:map[]}"
	Dec 13 13:09:31 addons-802674 crio[777]: time="2025-12-13T13:09:31.726672636Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-cnjm2 to CNI network \"kindnet\" (type=ptp)"
	Dec 13 13:09:31 addons-802674 crio[777]: time="2025-12-13T13:09:31.736473027Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-cnjm2 Namespace:default ID:6ea3edebf2a1b3fc4b35b8e18cd7c1f569901db67ad155d03771f8939551111f UID:c271d086-094c-4fde-ac55-fc63cff492ce NetNS:/var/run/netns/f15ea4d6-44a8-46ea-a50d-407145230c48 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000550e50}] Aliases:map[]}"
	Dec 13 13:09:31 addons-802674 crio[777]: time="2025-12-13T13:09:31.736612278Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-cnjm2 for CNI network kindnet (type=ptp)"
	Dec 13 13:09:31 addons-802674 crio[777]: time="2025-12-13T13:09:31.737448798Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 13:09:31 addons-802674 crio[777]: time="2025-12-13T13:09:31.738355279Z" level=info msg="Ran pod sandbox 6ea3edebf2a1b3fc4b35b8e18cd7c1f569901db67ad155d03771f8939551111f with infra container: default/hello-world-app-5d498dc89-cnjm2/POD" id=acb60b30-9135-4f8c-908a-e4ef04338c61 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:09:31 addons-802674 crio[777]: time="2025-12-13T13:09:31.739730327Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=7d481403-243b-4293-a784-f5f46ad71434 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:09:31 addons-802674 crio[777]: time="2025-12-13T13:09:31.739892122Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=7d481403-243b-4293-a784-f5f46ad71434 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:09:31 addons-802674 crio[777]: time="2025-12-13T13:09:31.739927231Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=7d481403-243b-4293-a784-f5f46ad71434 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:09:31 addons-802674 crio[777]: time="2025-12-13T13:09:31.740609594Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=09fad107-28a9-4982-83be-cebec9b27856 name=/runtime.v1.ImageService/PullImage
	Dec 13 13:09:31 addons-802674 crio[777]: time="2025-12-13T13:09:31.745550474Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 13 13:09:32 addons-802674 crio[777]: time="2025-12-13T13:09:32.532405458Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=09fad107-28a9-4982-83be-cebec9b27856 name=/runtime.v1.ImageService/PullImage
	Dec 13 13:09:32 addons-802674 crio[777]: time="2025-12-13T13:09:32.533039971Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=db6c2bc5-3d0b-46d6-8e27-9e32d07ab2a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:09:32 addons-802674 crio[777]: time="2025-12-13T13:09:32.534452291Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=61ce00d4-0a95-4d1a-835a-4ad194aec1e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:09:32 addons-802674 crio[777]: time="2025-12-13T13:09:32.538048209Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-cnjm2/hello-world-app" id=ce4c9ca6-1675-4436-9cc6-e5f6570426a3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:09:32 addons-802674 crio[777]: time="2025-12-13T13:09:32.538194676Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:09:32 addons-802674 crio[777]: time="2025-12-13T13:09:32.545517951Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:09:32 addons-802674 crio[777]: time="2025-12-13T13:09:32.545797671Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d74a005c37cdef179c62c31bca1c0260e3ae7e27a7b45c1cfc757905d85b0a29/merged/etc/passwd: no such file or directory"
	Dec 13 13:09:32 addons-802674 crio[777]: time="2025-12-13T13:09:32.545867311Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d74a005c37cdef179c62c31bca1c0260e3ae7e27a7b45c1cfc757905d85b0a29/merged/etc/group: no such file or directory"
	Dec 13 13:09:32 addons-802674 crio[777]: time="2025-12-13T13:09:32.546172845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:09:32 addons-802674 crio[777]: time="2025-12-13T13:09:32.576213264Z" level=info msg="Created container 7ebc00162bee8888a16a08d18ee005151f8d9d7d35497d80e7706f11673db37c: default/hello-world-app-5d498dc89-cnjm2/hello-world-app" id=ce4c9ca6-1675-4436-9cc6-e5f6570426a3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:09:32 addons-802674 crio[777]: time="2025-12-13T13:09:32.576938581Z" level=info msg="Starting container: 7ebc00162bee8888a16a08d18ee005151f8d9d7d35497d80e7706f11673db37c" id=4e367b71-72e4-4051-9e59-731d3886b5d1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:09:32 addons-802674 crio[777]: time="2025-12-13T13:09:32.578678255Z" level=info msg="Started container" PID=9521 containerID=7ebc00162bee8888a16a08d18ee005151f8d9d7d35497d80e7706f11673db37c description=default/hello-world-app-5d498dc89-cnjm2/hello-world-app id=4e367b71-72e4-4051-9e59-731d3886b5d1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6ea3edebf2a1b3fc4b35b8e18cd7c1f569901db67ad155d03771f8939551111f
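The CRI-O entries above show the usual image path for the hello-world-app pod: an ImageStatus check that finds nothing locally, followed by a PullImage of docker.io/kicbase/echo-server:1.0 that resolves to a digest. The sketch below walks those two CRI calls from a standalone Go client; it is neither kubelet nor CRI-O code, and the socket path, timeout, and error handling are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Socket path assumes a default CRI-O install.
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ic := runtimeapi.NewImageServiceClient(conn)
	img := &runtimeapi.ImageSpec{Image: "docker.io/kicbase/echo-server:1.0"}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// "Checking image status": a nil Image in the response means the image is
	// not in the local store.
	st, err := ic.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: img})
	if err != nil {
		panic(err)
	}
	if st.Image == nil {
		// "Pulling image" / "Pulled image": fetch it and print the resolved ref.
		resp, err := ic.PullImage(ctx, &runtimeapi.PullImageRequest{Image: img})
		if err != nil {
			panic(err)
		}
		fmt.Println("pulled:", resp.ImageRef)
		return
	}
	fmt.Println("already present:", st.Image.Id)
}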
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	7ebc00162bee8       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   6ea3edebf2a1b       hello-world-app-5d498dc89-cnjm2             default
	f7c4229a576fe       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             2 minutes ago            Running             registry-creds                           0                   ca3112429c4fd       registry-creds-764b6fb674-vppgx             kube-system
	fc1578c3383f9       public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c                                           2 minutes ago            Running             nginx                                    0                   c437daeca19cd       nginx                                       default
	76bc6f0c50f5e       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   3b163c12e079d       busybox                                     default
	efa46cf269b56       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   ed596f38d518b       csi-hostpathplugin-hzzp2                    kube-system
	ae27723662583       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   ed596f38d518b       csi-hostpathplugin-hzzp2                    kube-system
	25c2ccc8d56eb       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   ed596f38d518b       csi-hostpathplugin-hzzp2                    kube-system
	500e4d8d6d926       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   f5e99866f8e93       gcp-auth-78565c9fb4-x58fn                   gcp-auth
	00b38c263e000       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   ed596f38d518b       csi-hostpathplugin-hzzp2                    kube-system
	75d9ddc062ec2       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            2 minutes ago            Running             gadget                                   0                   1b12f8c73b1a6       gadget-2rht9                                gadget
	6df323a2878de       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   ed596f38d518b       csi-hostpathplugin-hzzp2                    kube-system
	909795cf069ef       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             2 minutes ago            Running             controller                               0                   31c05640213ab       ingress-nginx-controller-85d4c799dd-pqrjc   ingress-nginx
	c5db025aa30e9       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago            Running             registry-proxy                           0                   3b77c60f1e6f2       registry-proxy-q4bmk                        kube-system
	263b6770119de       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   2 minutes ago            Running             csi-external-health-monitor-controller   0                   ed596f38d518b       csi-hostpathplugin-hzzp2                    kube-system
	f08ae0fc41016       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   b524b8787f620       amd-gpu-device-plugin-jrjdp                 kube-system
	6d85d43816c0e       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   d1c120534dc2e       csi-hostpath-attacher-0                     kube-system
	40aee451d49aa       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   f9f4611822449       snapshot-controller-7d9fbc56b8-mqxdv        kube-system
	7f147ccf5e501       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   2aea74ac27707       nvidia-device-plugin-daemonset-bldsd        kube-system
	a9051d728dbfa       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   108b51d55bc13       csi-hostpath-resizer-0                      kube-system
	372b2eedcace7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   3 minutes ago            Exited              patch                                    0                   ef0d7f8d0e3fb       ingress-nginx-admission-patch-kh6b6         ingress-nginx
	fc7d97af030f5       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   158c513c1741c       metrics-server-85b7d694d7-lmm9f             kube-system
	f4ac5ed0bb71a       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   07815e046b576       snapshot-controller-7d9fbc56b8-nzsxs        kube-system
	04be5533939ef       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   ff5d556899332       yakd-dashboard-5ff678cb9-l5tbt              yakd-dashboard
	5851ed168deef       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   b8ec7fbdea730       local-path-provisioner-648f6765c9-5vk9k     local-path-storage
	a02eaaa05e8f3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   3 minutes ago            Exited              create                                   0                   ebee5b47a3296       ingress-nginx-admission-create-4vxk5        ingress-nginx
	bb2165f7660fc       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   3165a87ead941       registry-6b586f9694-8nh6x                   kube-system
	be21f9e65e565       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   e0908735a3d11       kube-ingress-dns-minikube                   kube-system
	8da50799f91fb       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago            Running             cloud-spanner-emulator                   0                   8efd9c2d5215e       cloud-spanner-emulator-5bdddb765-fpvjg      default
	810cfaaa4b781       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   f21678d9ca1e5       storage-provisioner                         kube-system
	5eca19a8b70c2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   5a22131de8640       coredns-66bc5c9577-bqhwx                    kube-system
	d50cb67d5dec7       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             4 minutes ago            Running             kube-proxy                               0                   3c089051a3fb1       kube-proxy-2ss46                            kube-system
	b6315f71701be       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   1f55378062026       kindnet-fctx2                               kube-system
	610b806094f38       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             4 minutes ago            Running             etcd                                     0                   730c82d0ec8ef       etcd-addons-802674                          kube-system
	2a7f427a075b6       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             4 minutes ago            Running             kube-apiserver                           0                   01befcabcdb91       kube-apiserver-addons-802674                kube-system
	9b7e546540c7c       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             4 minutes ago            Running             kube-controller-manager                  0                   af5edd33b4d3e       kube-controller-manager-addons-802674       kube-system
	dba035f34dd51       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             4 minutes ago            Running             kube-scheduler                           0                   9048f2bd642b0       kube-scheduler-addons-802674                kube-system
	
	
	==> coredns [5eca19a8b70c2a0e9d976b959fbf7d7aa4c7ee8009fb16d38e7b5f5c02b8cce6] <==
	[INFO] 10.244.0.22:59667 - 11691 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009607s
	[INFO] 10.244.0.22:35843 - 34063 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004741232s
	[INFO] 10.244.0.22:48981 - 34870 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004852956s
	[INFO] 10.244.0.22:41535 - 19514 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00396088s
	[INFO] 10.244.0.22:48154 - 29342 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005100534s
	[INFO] 10.244.0.22:37334 - 48135 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00472467s
	[INFO] 10.244.0.22:47750 - 28262 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005839182s
	[INFO] 10.244.0.22:44808 - 26686 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000863585s
	[INFO] 10.244.0.22:41672 - 22449 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00133782s
	[INFO] 10.244.0.25:57793 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000237277s
	[INFO] 10.244.0.25:36075 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000155906s
	[INFO] 10.244.0.26:40345 - 58089 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000183329s
	[INFO] 10.244.0.26:35436 - 6253 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000257443s
	[INFO] 10.244.0.26:44024 - 62172 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000150822s
	[INFO] 10.244.0.26:37519 - 34482 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000206888s
	[INFO] 10.244.0.26:48998 - 32633 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000118816s
	[INFO] 10.244.0.26:42401 - 15841 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000164291s
	[INFO] 10.244.0.26:56336 - 33542 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.006805409s
	[INFO] 10.244.0.26:54458 - 59811 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.006992404s
	[INFO] 10.244.0.26:54078 - 36621 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004566384s
	[INFO] 10.244.0.26:43220 - 35971 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.0048428s
	[INFO] 10.244.0.26:48301 - 62707 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.006061311s
	[INFO] 10.244.0.26:35799 - 32164 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.006857524s
	[INFO] 10.244.0.26:49729 - 36241 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001554792s
	[INFO] 10.244.0.26:52138 - 58754 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.002971194s
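A hedged aside on the coredns lines above: the NXDOMAIN answers for names like storage.googleapis.com.cluster.local come from the pod resolver expanding unqualified names through its search domains (cluster.local, svc.cluster.local, the GCE internal suffixes) before trying the bare name, which is standard behavior with the cluster's ndots setting rather than a failure. A trailing dot makes a name fully qualified and skips that expansion; the minimal sketch below illustrates the idea and is not part of the test suite.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// The trailing dot marks the name as rooted, so no search suffixes are appended.
	addrs, err := net.DefaultResolver.LookupIPAddr(ctx, "storage.googleapis.com.")
	if err != nil {
		panic(err)
	}
	for _, a := range addrs {
		fmt.Println(a.IP)
	}
}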
	
	
	==> describe nodes <==
	Name:               addons-802674
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-802674
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=addons-802674
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_05_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-802674
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-802674"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:05:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-802674
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:09:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:07:56 +0000   Sat, 13 Dec 2025 13:05:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:07:56 +0000   Sat, 13 Dec 2025 13:05:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:07:56 +0000   Sat, 13 Dec 2025 13:05:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 13:07:56 +0000   Sat, 13 Dec 2025 13:06:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-802674
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                919a78b6-9542-415b-ad40-dc5df4183c76
	  Boot ID:                    3a031c38-2de5-4abf-9191-ca3cf8c37af1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  default                     cloud-spanner-emulator-5bdddb765-fpvjg       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  default                     hello-world-app-5d498dc89-cnjm2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gadget                      gadget-2rht9                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  gcp-auth                    gcp-auth-78565c9fb4-x58fn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-pqrjc    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m2s
	  kube-system                 amd-gpu-device-plugin-jrjdp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 coredns-66bc5c9577-bqhwx                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m3s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 csi-hostpathplugin-hzzp2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 etcd-addons-802674                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m10s
	  kube-system                 kindnet-fctx2                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m3s
	  kube-system                 kube-apiserver-addons-802674                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-addons-802674        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-proxy-2ss46                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-addons-802674                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 metrics-server-85b7d694d7-lmm9f              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m2s
	  kube-system                 nvidia-device-plugin-daemonset-bldsd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 registry-6b586f9694-8nh6x                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 registry-creds-764b6fb674-vppgx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 registry-proxy-q4bmk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 snapshot-controller-7d9fbc56b8-mqxdv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 snapshot-controller-7d9fbc56b8-nzsxs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  local-path-storage          local-path-provisioner-648f6765c9-5vk9k      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-l5tbt               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m2s                   kube-proxy       
	  Normal  Starting                 4m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m14s (x8 over 4m14s)  kubelet          Node addons-802674 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s (x8 over 4m14s)  kubelet          Node addons-802674 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s (x8 over 4m14s)  kubelet          Node addons-802674 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m9s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s                   kubelet          Node addons-802674 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s                   kubelet          Node addons-802674 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s                   kubelet          Node addons-802674 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m5s                   node-controller  Node addons-802674 event: Registered Node addons-802674 in Controller
	  Normal  NodeReady                3m22s                  kubelet          Node addons-802674 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea b7 dd 32 fb 08 08 06
	[  +0.000396] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be c4 f7 a4 8d 16 08 06
	[Dec13 13:07] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +1.009708] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +1.024845] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +1.022879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +1.023888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +1.024907] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +2.047757] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +4.030610] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +8.255132] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[ +16.382284] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[Dec13 13:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	
	
	==> etcd [610b806094f3861cda2f55f3c5ae8348739fd03173056cb05f1e55d0f129881d] <==
	{"level":"warn","ts":"2025-12-13T13:05:20.462711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:05:20.469414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:05:20.484565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:05:20.490610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:05:20.498122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:05:20.539550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:05:31.698975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:05:31.705276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:05:57.931756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:05:57.938437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:05:57.957484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:05:57.963904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34030","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T13:06:21.076128Z","caller":"traceutil/trace.go:172","msg":"trace[861018002] transaction","detail":"{read_only:false; response_revision:1021; number_of_response:1; }","duration":"102.425338ms","start":"2025-12-13T13:06:20.973682Z","end":"2025-12-13T13:06:21.076108Z","steps":["trace[861018002] 'process raft request'  (duration: 102.310681ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:06:43.745943Z","caller":"traceutil/trace.go:172","msg":"trace[24304390] linearizableReadLoop","detail":"{readStateIndex:1228; appliedIndex:1228; }","duration":"135.01383ms","start":"2025-12-13T13:06:43.610904Z","end":"2025-12-13T13:06:43.745918Z","steps":["trace[24304390] 'read index received'  (duration: 135.006541ms)","trace[24304390] 'applied index is now lower than readState.Index'  (duration: 5.687µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T13:06:43.746152Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.214295ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T13:06:43.746201Z","caller":"traceutil/trace.go:172","msg":"trace[1387126825] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1195; }","duration":"135.299813ms","start":"2025-12-13T13:06:43.610892Z","end":"2025-12-13T13:06:43.746192Z","steps":["trace[1387126825] 'agreement among raft nodes before linearized reading'  (duration: 135.165064ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:06:43.746195Z","caller":"traceutil/trace.go:172","msg":"trace[1898574573] transaction","detail":"{read_only:false; response_revision:1196; number_of_response:1; }","duration":"195.681987ms","start":"2025-12-13T13:06:43.550496Z","end":"2025-12-13T13:06:43.746178Z","steps":["trace[1898574573] 'process raft request'  (duration: 195.528882ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:06:49.330072Z","caller":"traceutil/trace.go:172","msg":"trace[1495101274] transaction","detail":"{read_only:false; response_revision:1241; number_of_response:1; }","duration":"131.841715ms","start":"2025-12-13T13:06:49.198213Z","end":"2025-12-13T13:06:49.330055Z","steps":["trace[1495101274] 'process raft request'  (duration: 131.807713ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:06:49.330120Z","caller":"traceutil/trace.go:172","msg":"trace[18349875] transaction","detail":"{read_only:false; response_revision:1240; number_of_response:1; }","duration":"132.716688ms","start":"2025-12-13T13:06:49.197383Z","end":"2025-12-13T13:06:49.330099Z","steps":["trace[18349875] 'process raft request'  (duration: 132.559615ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T13:06:49.517455Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.000204ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T13:06:49.517524Z","caller":"traceutil/trace.go:172","msg":"trace[665132156] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1241; }","duration":"132.083768ms","start":"2025-12-13T13:06:49.385426Z","end":"2025-12-13T13:06:49.517510Z","steps":["trace[665132156] 'range keys from in-memory index tree'  (duration: 131.911759ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T13:06:49.517471Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.361607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourceclaims\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-12-13T13:06:49.517551Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.12289ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-12-13T13:06:49.517585Z","caller":"traceutil/trace.go:172","msg":"trace[637629767] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1241; }","duration":"132.15912ms","start":"2025-12-13T13:06:49.385416Z","end":"2025-12-13T13:06:49.517575Z","steps":["trace[637629767] 'range keys from in-memory index tree'  (duration: 131.979698ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:06:49.517562Z","caller":"traceutil/trace.go:172","msg":"trace[364224605] range","detail":"{range_begin:/registry/resourceclaims; range_end:; response_count:0; response_revision:1241; }","duration":"123.454512ms","start":"2025-12-13T13:06:49.394094Z","end":"2025-12-13T13:06:49.517548Z","steps":["trace[364224605] 'range keys from in-memory index tree'  (duration: 123.306259ms)"],"step_count":1}
	
	
	==> gcp-auth [500e4d8d6d926896ae927d438c08663bc74a1f62f6f004a9ccf8e29479bc4463] <==
	2025/12/13 13:06:44 GCP Auth Webhook started!
	2025/12/13 13:06:49 Ready to marshal response ...
	2025/12/13 13:06:49 Ready to write response ...
	2025/12/13 13:06:51 Ready to marshal response ...
	2025/12/13 13:06:51 Ready to write response ...
	2025/12/13 13:06:51 Ready to marshal response ...
	2025/12/13 13:06:51 Ready to write response ...
	2025/12/13 13:07:04 Ready to marshal response ...
	2025/12/13 13:07:04 Ready to write response ...
	2025/12/13 13:07:09 Ready to marshal response ...
	2025/12/13 13:07:09 Ready to write response ...
	2025/12/13 13:07:17 Ready to marshal response ...
	2025/12/13 13:07:17 Ready to write response ...
	2025/12/13 13:07:17 Ready to marshal response ...
	2025/12/13 13:07:17 Ready to write response ...
	2025/12/13 13:07:22 Ready to marshal response ...
	2025/12/13 13:07:22 Ready to write response ...
	2025/12/13 13:07:27 Ready to marshal response ...
	2025/12/13 13:07:27 Ready to write response ...
	2025/12/13 13:07:46 Ready to marshal response ...
	2025/12/13 13:07:46 Ready to write response ...
	2025/12/13 13:09:31 Ready to marshal response ...
	2025/12/13 13:09:31 Ready to write response ...
	
	
	==> kernel <==
	 13:09:33 up  1:52,  0 user,  load average: 0.38, 0.95, 1.30
	Linux addons-802674 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b6315f71701be89e474fba173cf05ee0075e34674512768e6df77a3cc4cd9523] <==
	I1213 13:07:29.968911       1 main.go:301] handling current node
	I1213 13:07:39.973912       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:07:39.973951       1 main.go:301] handling current node
	I1213 13:07:49.970917       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:07:49.970945       1 main.go:301] handling current node
	I1213 13:07:59.971500       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:07:59.971534       1 main.go:301] handling current node
	I1213 13:08:09.971905       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:08:09.971961       1 main.go:301] handling current node
	I1213 13:08:19.970880       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:08:19.970909       1 main.go:301] handling current node
	I1213 13:08:29.978007       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:08:29.978036       1 main.go:301] handling current node
	I1213 13:08:39.970868       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:08:39.970898       1 main.go:301] handling current node
	I1213 13:08:49.969885       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:08:49.969927       1 main.go:301] handling current node
	I1213 13:08:59.976687       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:08:59.976722       1 main.go:301] handling current node
	I1213 13:09:09.970467       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:09:09.970523       1 main.go:301] handling current node
	I1213 13:09:19.977620       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:09:19.977652       1 main.go:301] handling current node
	I1213 13:09:29.968797       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:09:29.968870       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2a7f427a075b6ebada9bc037f76c3a7326d7c26ef26054dd05f59dd7a696441e] <==
	W1213 13:05:57.963938       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1213 13:06:10.167344       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.251.54:443: connect: connection refused
	E1213 13:06:10.167402       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.251.54:443: connect: connection refused" logger="UnhandledError"
	W1213 13:06:10.167453       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.251.54:443: connect: connection refused
	E1213 13:06:10.167513       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.251.54:443: connect: connection refused" logger="UnhandledError"
	W1213 13:06:10.184812       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.251.54:443: connect: connection refused
	E1213 13:06:10.184945       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.251.54:443: connect: connection refused" logger="UnhandledError"
	W1213 13:06:10.186067       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.251.54:443: connect: connection refused
	E1213 13:06:10.186107       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.251.54:443: connect: connection refused" logger="UnhandledError"
	E1213 13:06:27.736403       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.35.55:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.35.55:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.35.55:443: connect: connection refused" logger="UnhandledError"
	W1213 13:06:27.736467       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 13:06:27.736859       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1213 13:06:27.737182       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.35.55:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.35.55:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.35.55:443: connect: connection refused" logger="UnhandledError"
	E1213 13:06:27.741878       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.35.55:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.35.55:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.35.55:443: connect: connection refused" logger="UnhandledError"
	E1213 13:06:27.762512       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.35.55:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.35.55:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.35.55:443: connect: connection refused" logger="UnhandledError"
	I1213 13:06:27.829279       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1213 13:06:58.668627       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56160: use of closed network connection
	E1213 13:06:58.817551       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56192: use of closed network connection
	I1213 13:07:04.617348       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1213 13:07:04.796760       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.188.213"}
	I1213 13:07:29.425768       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1213 13:09:31.486698       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.243.122"}
	
	
	==> kube-controller-manager [9b7e546540c7cea0b7f684aeaa74db9dca87eb76f77d77d8121f3927b1239ae2] <==
	I1213 13:05:27.917286       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1213 13:05:27.917302       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 13:05:27.917433       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1213 13:05:27.917747       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 13:05:27.918688       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 13:05:27.919350       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1213 13:05:27.921662       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 13:05:27.921758       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 13:05:27.922976       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 13:05:27.934323       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1213 13:05:27.934368       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1213 13:05:27.934390       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1213 13:05:27.934401       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1213 13:05:27.934406       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1213 13:05:27.935801       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 13:05:27.940026       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-802674" podCIDRs=["10.244.0.0/24"]
	E1213 13:05:30.432050       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1213 13:05:57.926263       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1213 13:05:57.926403       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1213 13:05:57.926463       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1213 13:05:57.949166       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1213 13:05:57.952654       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1213 13:05:58.027281       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 13:05:58.053635       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 13:06:12.858178       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d50cb67d5dec7ec3f682549ab14b880502935a667c57f8d8cdb0c463515a22e6] <==
	I1213 13:05:29.517294       1 server_linux.go:53] "Using iptables proxy"
	I1213 13:05:29.626873       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 13:05:29.728800       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 13:05:29.728849       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1213 13:05:29.728990       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:05:30.101546       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:05:30.101628       1 server_linux.go:132] "Using iptables Proxier"
	I1213 13:05:30.159360       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:05:30.208731       1 server.go:527] "Version info" version="v1.34.2"
	I1213 13:05:30.209074       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:05:30.212113       1 config.go:200] "Starting service config controller"
	I1213 13:05:30.212395       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:05:30.212454       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:05:30.212554       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:05:30.212593       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:05:30.213251       1 config.go:309] "Starting node config controller"
	I1213 13:05:30.213302       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:05:30.213329       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:05:30.212200       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:05:30.319286       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 13:05:30.319320       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 13:05:30.319284       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [dba035f34dd51a8cd71b4f0ae554035ac03076228fce0be93b5b35ef0ca0e069] <==
	E1213 13:05:20.927492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 13:05:20.927569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 13:05:20.927582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 13:05:20.927607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 13:05:20.927680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 13:05:20.927696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 13:05:20.927728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 13:05:20.927618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 13:05:20.927837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 13:05:20.927908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 13:05:20.927963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 13:05:20.927979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 13:05:21.730875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 13:05:21.735023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 13:05:21.742119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 13:05:21.868457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 13:05:21.908573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 13:05:21.932404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 13:05:21.932436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 13:05:21.937696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 13:05:21.997737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 13:05:22.013674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 13:05:22.026683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 13:05:22.119887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1213 13:05:23.725423       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 13:07:52 addons-802674 kubelet[1298]: I1213 13:07:52.857500    1298 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^b7d05923-d824-11f0-838d-fa3657ec9988\") pod \"a1eed49a-2c5a-45c9-9530-107a852e2c08\" (UID: \"a1eed49a-2c5a-45c9-9530-107a852e2c08\") "
	Dec 13 13:07:52 addons-802674 kubelet[1298]: I1213 13:07:52.857552    1298 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a1eed49a-2c5a-45c9-9530-107a852e2c08-gcp-creds\") pod \"a1eed49a-2c5a-45c9-9530-107a852e2c08\" (UID: \"a1eed49a-2c5a-45c9-9530-107a852e2c08\") "
	Dec 13 13:07:52 addons-802674 kubelet[1298]: I1213 13:07:52.857579    1298 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvknc\" (UniqueName: \"kubernetes.io/projected/a1eed49a-2c5a-45c9-9530-107a852e2c08-kube-api-access-gvknc\") pod \"a1eed49a-2c5a-45c9-9530-107a852e2c08\" (UID: \"a1eed49a-2c5a-45c9-9530-107a852e2c08\") "
	Dec 13 13:07:52 addons-802674 kubelet[1298]: I1213 13:07:52.857722    1298 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1eed49a-2c5a-45c9-9530-107a852e2c08-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "a1eed49a-2c5a-45c9-9530-107a852e2c08" (UID: "a1eed49a-2c5a-45c9-9530-107a852e2c08"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 13 13:07:52 addons-802674 kubelet[1298]: I1213 13:07:52.860125    1298 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1eed49a-2c5a-45c9-9530-107a852e2c08-kube-api-access-gvknc" (OuterVolumeSpecName: "kube-api-access-gvknc") pod "a1eed49a-2c5a-45c9-9530-107a852e2c08" (UID: "a1eed49a-2c5a-45c9-9530-107a852e2c08"). InnerVolumeSpecName "kube-api-access-gvknc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 13 13:07:52 addons-802674 kubelet[1298]: I1213 13:07:52.861172    1298 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^b7d05923-d824-11f0-838d-fa3657ec9988" (OuterVolumeSpecName: "task-pv-storage") pod "a1eed49a-2c5a-45c9-9530-107a852e2c08" (UID: "a1eed49a-2c5a-45c9-9530-107a852e2c08"). InnerVolumeSpecName "pvc-e714ed7c-0a75-4f4e-8eda-3d7fe5919b14". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 13 13:07:52 addons-802674 kubelet[1298]: I1213 13:07:52.958799    1298 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gvknc\" (UniqueName: \"kubernetes.io/projected/a1eed49a-2c5a-45c9-9530-107a852e2c08-kube-api-access-gvknc\") on node \"addons-802674\" DevicePath \"\""
	Dec 13 13:07:52 addons-802674 kubelet[1298]: I1213 13:07:52.958855    1298 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-e714ed7c-0a75-4f4e-8eda-3d7fe5919b14\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^b7d05923-d824-11f0-838d-fa3657ec9988\") on node \"addons-802674\" "
	Dec 13 13:07:52 addons-802674 kubelet[1298]: I1213 13:07:52.958867    1298 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a1eed49a-2c5a-45c9-9530-107a852e2c08-gcp-creds\") on node \"addons-802674\" DevicePath \"\""
	Dec 13 13:07:52 addons-802674 kubelet[1298]: I1213 13:07:52.963063    1298 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-e714ed7c-0a75-4f4e-8eda-3d7fe5919b14" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^b7d05923-d824-11f0-838d-fa3657ec9988") on node "addons-802674"
	Dec 13 13:07:53 addons-802674 kubelet[1298]: I1213 13:07:53.059468    1298 reconciler_common.go:299] "Volume detached for volume \"pvc-e714ed7c-0a75-4f4e-8eda-3d7fe5919b14\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^b7d05923-d824-11f0-838d-fa3657ec9988\") on node \"addons-802674\" DevicePath \"\""
	Dec 13 13:07:53 addons-802674 kubelet[1298]: I1213 13:07:53.109425    1298 scope.go:117] "RemoveContainer" containerID="dc7ac20b6153958a404337da8a1542e7a28b02511cf646e4612244bd4d97baba"
	Dec 13 13:07:53 addons-802674 kubelet[1298]: I1213 13:07:53.120547    1298 scope.go:117] "RemoveContainer" containerID="dc7ac20b6153958a404337da8a1542e7a28b02511cf646e4612244bd4d97baba"
	Dec 13 13:07:53 addons-802674 kubelet[1298]: E1213 13:07:53.120998    1298 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc7ac20b6153958a404337da8a1542e7a28b02511cf646e4612244bd4d97baba\": container with ID starting with dc7ac20b6153958a404337da8a1542e7a28b02511cf646e4612244bd4d97baba not found: ID does not exist" containerID="dc7ac20b6153958a404337da8a1542e7a28b02511cf646e4612244bd4d97baba"
	Dec 13 13:07:53 addons-802674 kubelet[1298]: I1213 13:07:53.121041    1298 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc7ac20b6153958a404337da8a1542e7a28b02511cf646e4612244bd4d97baba"} err="failed to get container status \"dc7ac20b6153958a404337da8a1542e7a28b02511cf646e4612244bd4d97baba\": rpc error: code = NotFound desc = could not find container \"dc7ac20b6153958a404337da8a1542e7a28b02511cf646e4612244bd4d97baba\": container with ID starting with dc7ac20b6153958a404337da8a1542e7a28b02511cf646e4612244bd4d97baba not found: ID does not exist"
	Dec 13 13:07:53 addons-802674 kubelet[1298]: I1213 13:07:53.490559    1298 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1eed49a-2c5a-45c9-9530-107a852e2c08" path="/var/lib/kubelet/pods/a1eed49a-2c5a-45c9-9530-107a852e2c08/volumes"
	Dec 13 13:07:54 addons-802674 kubelet[1298]: I1213 13:07:54.487563    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-q4bmk" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 13:08:03 addons-802674 kubelet[1298]: I1213 13:08:03.490342    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-jrjdp" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 13:08:23 addons-802674 kubelet[1298]: I1213 13:08:23.521973    1298 scope.go:117] "RemoveContainer" containerID="e9677bb66ac24656afe0c5ef928902d1f8d14d50bc658fcf8d385cd52c11bac3"
	Dec 13 13:08:23 addons-802674 kubelet[1298]: I1213 13:08:23.530912    1298 scope.go:117] "RemoveContainer" containerID="04e4470d52d4a96d1f4a9cb825d7072366eaeded8e788d10e2ee1060954b0801"
	Dec 13 13:08:51 addons-802674 kubelet[1298]: I1213 13:08:51.487701    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-bldsd" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 13:09:12 addons-802674 kubelet[1298]: I1213 13:09:12.487184    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-jrjdp" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 13:09:14 addons-802674 kubelet[1298]: I1213 13:09:14.487433    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-q4bmk" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 13:09:31 addons-802674 kubelet[1298]: I1213 13:09:31.479962    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c271d086-094c-4fde-ac55-fc63cff492ce-gcp-creds\") pod \"hello-world-app-5d498dc89-cnjm2\" (UID: \"c271d086-094c-4fde-ac55-fc63cff492ce\") " pod="default/hello-world-app-5d498dc89-cnjm2"
	Dec 13 13:09:31 addons-802674 kubelet[1298]: I1213 13:09:31.480119    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xjvc\" (UniqueName: \"kubernetes.io/projected/c271d086-094c-4fde-ac55-fc63cff492ce-kube-api-access-7xjvc\") pod \"hello-world-app-5d498dc89-cnjm2\" (UID: \"c271d086-094c-4fde-ac55-fc63cff492ce\") " pod="default/hello-world-app-5d498dc89-cnjm2"
	
	
	==> storage-provisioner [810cfaaa4b7814b312b2db787f8ec029d4e832cc7e12034fb85045552bd3f724] <==
	W1213 13:09:07.527832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:09.530856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:09.534552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:11.538139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:11.542767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:13.546068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:13.550398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:15.553298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:15.556799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:17.559417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:17.563353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:19.566088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:19.569754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:21.572918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:21.576425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:23.579799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:23.584344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:25.587033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:25.591700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:27.595181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:27.598924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:29.601687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:29.606514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:31.615569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:09:31.622769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-802674 -n addons-802674
helpers_test.go:270: (dbg) Run:  kubectl --context addons-802674 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-4vxk5 ingress-nginx-admission-patch-kh6b6
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-802674 describe pod ingress-nginx-admission-create-4vxk5 ingress-nginx-admission-patch-kh6b6
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-802674 describe pod ingress-nginx-admission-create-4vxk5 ingress-nginx-admission-patch-kh6b6: exit status 1 (58.665747ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4vxk5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kh6b6" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-802674 describe pod ingress-nginx-admission-create-4vxk5 ingress-nginx-admission-patch-kh6b6: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-802674 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-802674 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (247.253603ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:09:33.861245  410287 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:09:33.861544  410287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:09:33.861555  410287 out.go:374] Setting ErrFile to fd 2...
	I1213 13:09:33.861560  410287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:09:33.861809  410287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:09:33.862425  410287 mustload.go:66] Loading cluster: addons-802674
	I1213 13:09:33.863437  410287 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:09:33.863464  410287 addons.go:622] checking whether the cluster is paused
	I1213 13:09:33.863584  410287 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:09:33.863600  410287 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:09:33.864017  410287 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:09:33.881898  410287 ssh_runner.go:195] Run: systemctl --version
	I1213 13:09:33.881957  410287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:09:33.899175  410287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:09:33.993520  410287 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:09:33.993605  410287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:09:34.025403  410287 cri.go:89] found id: "f7c4229a576fe07cf0919814dfee6c0705b49c93f6835f46215361a77c4c55ac"
	I1213 13:09:34.025425  410287 cri.go:89] found id: "efa46cf269b564b4844602a1d159fe37ab66ca5f6b418f189ee827b9dba093c8"
	I1213 13:09:34.025429  410287 cri.go:89] found id: "ae277236625835803563b5d3709c95c1715a58bb7565d8ec6086941d0839195e"
	I1213 13:09:34.025432  410287 cri.go:89] found id: "25c2ccc8d56eb50465c9572ac4b69e4c56f4fc5934e450f09c18265ee0577194"
	I1213 13:09:34.025434  410287 cri.go:89] found id: "00b38c263e00072dd7d50a33875d10fc536592b2a1d6e234346711aac7cbbec0"
	I1213 13:09:34.025439  410287 cri.go:89] found id: "6df323a2878def1aae2a14ca9c2ad038546721c6ae36b6f316f313176188b46c"
	I1213 13:09:34.025442  410287 cri.go:89] found id: "c5db025aa30e9cd2c67c81ec6bc3c8ea9785b55f88c26b2645dcdbd948a7de0d"
	I1213 13:09:34.025444  410287 cri.go:89] found id: "263b6770119de12b6f6ae321a34d15fe0c91d69ef191dfcd91463e142f87e2d3"
	I1213 13:09:34.025447  410287 cri.go:89] found id: "f08ae0fc41016ef54669e28ca43b05abcc99b07e53b91caaf3b697ef447ee88d"
	I1213 13:09:34.025455  410287 cri.go:89] found id: "6d85d43816c0e9c27cdb9a0406519758a7c04501507c2a84cdacf18ed0bfe19f"
	I1213 13:09:34.025458  410287 cri.go:89] found id: "40aee451d49aa718e9b9b630dcc767fa2e58079b1b3f9728f0c44aa6c3b5c7e5"
	I1213 13:09:34.025460  410287 cri.go:89] found id: "7f147ccf5e501405b11f6c314e4bfd0d7c26b4a6bf64001ba70bbe56a38b0504"
	I1213 13:09:34.025465  410287 cri.go:89] found id: "a9051d728dbfaaa93fa17ffd17029974b838f52c264e001035d0dcb21ffd793a"
	I1213 13:09:34.025469  410287 cri.go:89] found id: "fc7d97af030f51f4603abd265b93269845365378e8e8c119222bafedc7cc4351"
	I1213 13:09:34.025472  410287 cri.go:89] found id: "f4ac5ed0bb71af6a3a22c2384168e3c4e9e23c1de940ae834d03068e9fea08ee"
	I1213 13:09:34.025477  410287 cri.go:89] found id: "bb2165f7660fc2ba491c4871263b79975a85cb6def2d2a4f73eca8a2dd7d8f07"
	I1213 13:09:34.025481  410287 cri.go:89] found id: "be21f9e65e565f792b744333cf27c95ecfa408d73cd2a551f0c5c7f265a293e3"
	I1213 13:09:34.025485  410287 cri.go:89] found id: "810cfaaa4b7814b312b2db787f8ec029d4e832cc7e12034fb85045552bd3f724"
	I1213 13:09:34.025488  410287 cri.go:89] found id: "5eca19a8b70c2a0e9d976b959fbf7d7aa4c7ee8009fb16d38e7b5f5c02b8cce6"
	I1213 13:09:34.025490  410287 cri.go:89] found id: "d50cb67d5dec7ec3f682549ab14b880502935a667c57f8d8cdb0c463515a22e6"
	I1213 13:09:34.025493  410287 cri.go:89] found id: "b6315f71701be89e474fba173cf05ee0075e34674512768e6df77a3cc4cd9523"
	I1213 13:09:34.025504  410287 cri.go:89] found id: "610b806094f3861cda2f55f3c5ae8348739fd03173056cb05f1e55d0f129881d"
	I1213 13:09:34.025509  410287 cri.go:89] found id: "2a7f427a075b6ebada9bc037f76c3a7326d7c26ef26054dd05f59dd7a696441e"
	I1213 13:09:34.025516  410287 cri.go:89] found id: "9b7e546540c7cea0b7f684aeaa74db9dca87eb76f77d77d8121f3927b1239ae2"
	I1213 13:09:34.025521  410287 cri.go:89] found id: "dba035f34dd51a8cd71b4f0ae554035ac03076228fce0be93b5b35ef0ca0e069"
	I1213 13:09:34.025524  410287 cri.go:89] found id: ""
	I1213 13:09:34.025562  410287 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:09:34.039897  410287 out.go:203] 
	W1213 13:09:34.040956  410287 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:09:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:09:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 13:09:34.040973  410287 out.go:285] * 
	* 
	W1213 13:09:34.045121  410287 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 13:09:34.046366  410287 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-802674 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-802674 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-802674 addons disable ingress --alsologtostderr -v=1: exit status 11 (239.733053ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:09:34.106805  410365 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:09:34.107037  410365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:09:34.107046  410365 out.go:374] Setting ErrFile to fd 2...
	I1213 13:09:34.107050  410365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:09:34.107262  410365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:09:34.107494  410365 mustload.go:66] Loading cluster: addons-802674
	I1213 13:09:34.107862  410365 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:09:34.107889  410365 addons.go:622] checking whether the cluster is paused
	I1213 13:09:34.107979  410365 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:09:34.107992  410365 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:09:34.108332  410365 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:09:34.126848  410365 ssh_runner.go:195] Run: systemctl --version
	I1213 13:09:34.126905  410365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:09:34.143882  410365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:09:34.237825  410365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:09:34.237905  410365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:09:34.266428  410365 cri.go:89] found id: "f7c4229a576fe07cf0919814dfee6c0705b49c93f6835f46215361a77c4c55ac"
	I1213 13:09:34.266458  410365 cri.go:89] found id: "efa46cf269b564b4844602a1d159fe37ab66ca5f6b418f189ee827b9dba093c8"
	I1213 13:09:34.266463  410365 cri.go:89] found id: "ae277236625835803563b5d3709c95c1715a58bb7565d8ec6086941d0839195e"
	I1213 13:09:34.266467  410365 cri.go:89] found id: "25c2ccc8d56eb50465c9572ac4b69e4c56f4fc5934e450f09c18265ee0577194"
	I1213 13:09:34.266472  410365 cri.go:89] found id: "00b38c263e00072dd7d50a33875d10fc536592b2a1d6e234346711aac7cbbec0"
	I1213 13:09:34.266476  410365 cri.go:89] found id: "6df323a2878def1aae2a14ca9c2ad038546721c6ae36b6f316f313176188b46c"
	I1213 13:09:34.266480  410365 cri.go:89] found id: "c5db025aa30e9cd2c67c81ec6bc3c8ea9785b55f88c26b2645dcdbd948a7de0d"
	I1213 13:09:34.266485  410365 cri.go:89] found id: "263b6770119de12b6f6ae321a34d15fe0c91d69ef191dfcd91463e142f87e2d3"
	I1213 13:09:34.266490  410365 cri.go:89] found id: "f08ae0fc41016ef54669e28ca43b05abcc99b07e53b91caaf3b697ef447ee88d"
	I1213 13:09:34.266497  410365 cri.go:89] found id: "6d85d43816c0e9c27cdb9a0406519758a7c04501507c2a84cdacf18ed0bfe19f"
	I1213 13:09:34.266503  410365 cri.go:89] found id: "40aee451d49aa718e9b9b630dcc767fa2e58079b1b3f9728f0c44aa6c3b5c7e5"
	I1213 13:09:34.266508  410365 cri.go:89] found id: "7f147ccf5e501405b11f6c314e4bfd0d7c26b4a6bf64001ba70bbe56a38b0504"
	I1213 13:09:34.266513  410365 cri.go:89] found id: "a9051d728dbfaaa93fa17ffd17029974b838f52c264e001035d0dcb21ffd793a"
	I1213 13:09:34.266518  410365 cri.go:89] found id: "fc7d97af030f51f4603abd265b93269845365378e8e8c119222bafedc7cc4351"
	I1213 13:09:34.266524  410365 cri.go:89] found id: "f4ac5ed0bb71af6a3a22c2384168e3c4e9e23c1de940ae834d03068e9fea08ee"
	I1213 13:09:34.266536  410365 cri.go:89] found id: "bb2165f7660fc2ba491c4871263b79975a85cb6def2d2a4f73eca8a2dd7d8f07"
	I1213 13:09:34.266544  410365 cri.go:89] found id: "be21f9e65e565f792b744333cf27c95ecfa408d73cd2a551f0c5c7f265a293e3"
	I1213 13:09:34.266550  410365 cri.go:89] found id: "810cfaaa4b7814b312b2db787f8ec029d4e832cc7e12034fb85045552bd3f724"
	I1213 13:09:34.266555  410365 cri.go:89] found id: "5eca19a8b70c2a0e9d976b959fbf7d7aa4c7ee8009fb16d38e7b5f5c02b8cce6"
	I1213 13:09:34.266558  410365 cri.go:89] found id: "d50cb67d5dec7ec3f682549ab14b880502935a667c57f8d8cdb0c463515a22e6"
	I1213 13:09:34.266562  410365 cri.go:89] found id: "b6315f71701be89e474fba173cf05ee0075e34674512768e6df77a3cc4cd9523"
	I1213 13:09:34.266566  410365 cri.go:89] found id: "610b806094f3861cda2f55f3c5ae8348739fd03173056cb05f1e55d0f129881d"
	I1213 13:09:34.266570  410365 cri.go:89] found id: "2a7f427a075b6ebada9bc037f76c3a7326d7c26ef26054dd05f59dd7a696441e"
	I1213 13:09:34.266574  410365 cri.go:89] found id: "9b7e546540c7cea0b7f684aeaa74db9dca87eb76f77d77d8121f3927b1239ae2"
	I1213 13:09:34.266593  410365 cri.go:89] found id: "dba035f34dd51a8cd71b4f0ae554035ac03076228fce0be93b5b35ef0ca0e069"
	I1213 13:09:34.266598  410365 cri.go:89] found id: ""
	I1213 13:09:34.266646  410365 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:09:34.280083  410365 out.go:203] 
	W1213 13:09:34.281519  410365 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:09:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:09:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 13:09:34.281605  410365 out.go:285] * 
	* 
	W1213 13:09:34.285647  410365 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 13:09:34.286935  410365 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-802674 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (149.91s)
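Note on the failure mode: the `addons disable` commands above do not fail inside the addon itself. As the stderr capture shows, minikube first checks whether the cluster is paused by listing kube-system containers with crictl and then running `sudo runc list -f json` on the node; that second command exits non-zero because `/run/runc` does not exist on this crio node, so the CLI aborts with MK_ADDON_DISABLE_PAUSED before the addon is touched. A minimal sketch of reproducing that check by hand against the same profile (the `/run/crun` path in the last command is an assumption; crio may be using crun instead of runc, which the log does not confirm):

	out/minikube-linux-amd64 -p addons-802674 ssh -- sudo runc list -f json
	# expected on this node: open /run/runc: no such file or directory
	out/minikube-linux-amd64 -p addons-802674 ssh -- "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# the crictl half of the check succeeds and lists the kube-system container IDs
	out/minikube-linux-amd64 -p addons-802674 ssh -- sudo ls /run/crun
	# assumed location of the OCI runtime state if crio is configured with crun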

                                                
                                    
TestAddons/parallel/InspektorGadget (5.28s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-2rht9" [71ccddf9-9e49-433d-a7b7-7f0dde7bbd97] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003484429s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-802674 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-802674 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (278.827731ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:07:06.708716  405532 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:07:06.709027  405532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:06.709038  405532 out.go:374] Setting ErrFile to fd 2...
	I1213 13:07:06.709045  405532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:06.709415  405532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:07:06.709762  405532 mustload.go:66] Loading cluster: addons-802674
	I1213 13:07:06.710265  405532 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:06.710293  405532 addons.go:622] checking whether the cluster is paused
	I1213 13:07:06.710425  405532 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:06.710441  405532 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:07:06.711111  405532 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:07:06.731977  405532 ssh_runner.go:195] Run: systemctl --version
	I1213 13:07:06.732047  405532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:07:06.753343  405532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:07:06.857289  405532 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:07:06.857368  405532 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:07:06.891347  405532 cri.go:89] found id: "efa46cf269b564b4844602a1d159fe37ab66ca5f6b418f189ee827b9dba093c8"
	I1213 13:07:06.891379  405532 cri.go:89] found id: "ae277236625835803563b5d3709c95c1715a58bb7565d8ec6086941d0839195e"
	I1213 13:07:06.891385  405532 cri.go:89] found id: "25c2ccc8d56eb50465c9572ac4b69e4c56f4fc5934e450f09c18265ee0577194"
	I1213 13:07:06.891390  405532 cri.go:89] found id: "00b38c263e00072dd7d50a33875d10fc536592b2a1d6e234346711aac7cbbec0"
	I1213 13:07:06.891394  405532 cri.go:89] found id: "6df323a2878def1aae2a14ca9c2ad038546721c6ae36b6f316f313176188b46c"
	I1213 13:07:06.891400  405532 cri.go:89] found id: "c5db025aa30e9cd2c67c81ec6bc3c8ea9785b55f88c26b2645dcdbd948a7de0d"
	I1213 13:07:06.891405  405532 cri.go:89] found id: "263b6770119de12b6f6ae321a34d15fe0c91d69ef191dfcd91463e142f87e2d3"
	I1213 13:07:06.891409  405532 cri.go:89] found id: "f08ae0fc41016ef54669e28ca43b05abcc99b07e53b91caaf3b697ef447ee88d"
	I1213 13:07:06.891413  405532 cri.go:89] found id: "6d85d43816c0e9c27cdb9a0406519758a7c04501507c2a84cdacf18ed0bfe19f"
	I1213 13:07:06.891421  405532 cri.go:89] found id: "40aee451d49aa718e9b9b630dcc767fa2e58079b1b3f9728f0c44aa6c3b5c7e5"
	I1213 13:07:06.891426  405532 cri.go:89] found id: "7f147ccf5e501405b11f6c314e4bfd0d7c26b4a6bf64001ba70bbe56a38b0504"
	I1213 13:07:06.891430  405532 cri.go:89] found id: "a9051d728dbfaaa93fa17ffd17029974b838f52c264e001035d0dcb21ffd793a"
	I1213 13:07:06.891452  405532 cri.go:89] found id: "fc7d97af030f51f4603abd265b93269845365378e8e8c119222bafedc7cc4351"
	I1213 13:07:06.891465  405532 cri.go:89] found id: "f4ac5ed0bb71af6a3a22c2384168e3c4e9e23c1de940ae834d03068e9fea08ee"
	I1213 13:07:06.891470  405532 cri.go:89] found id: "bb2165f7660fc2ba491c4871263b79975a85cb6def2d2a4f73eca8a2dd7d8f07"
	I1213 13:07:06.891478  405532 cri.go:89] found id: "be21f9e65e565f792b744333cf27c95ecfa408d73cd2a551f0c5c7f265a293e3"
	I1213 13:07:06.891483  405532 cri.go:89] found id: "810cfaaa4b7814b312b2db787f8ec029d4e832cc7e12034fb85045552bd3f724"
	I1213 13:07:06.891490  405532 cri.go:89] found id: "5eca19a8b70c2a0e9d976b959fbf7d7aa4c7ee8009fb16d38e7b5f5c02b8cce6"
	I1213 13:07:06.891495  405532 cri.go:89] found id: "d50cb67d5dec7ec3f682549ab14b880502935a667c57f8d8cdb0c463515a22e6"
	I1213 13:07:06.891499  405532 cri.go:89] found id: "b6315f71701be89e474fba173cf05ee0075e34674512768e6df77a3cc4cd9523"
	I1213 13:07:06.891503  405532 cri.go:89] found id: "610b806094f3861cda2f55f3c5ae8348739fd03173056cb05f1e55d0f129881d"
	I1213 13:07:06.891507  405532 cri.go:89] found id: "2a7f427a075b6ebada9bc037f76c3a7326d7c26ef26054dd05f59dd7a696441e"
	I1213 13:07:06.891512  405532 cri.go:89] found id: "9b7e546540c7cea0b7f684aeaa74db9dca87eb76f77d77d8121f3927b1239ae2"
	I1213 13:07:06.891517  405532 cri.go:89] found id: "dba035f34dd51a8cd71b4f0ae554035ac03076228fce0be93b5b35ef0ca0e069"
	I1213 13:07:06.891521  405532 cri.go:89] found id: ""
	I1213 13:07:06.891576  405532 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:07:06.909550  405532 out.go:203] 
	W1213 13:07:06.910941  405532 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 13:07:06.910968  405532 out.go:285] * 
	* 
	W1213 13:07:06.917589  405532 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 13:07:06.919485  405532 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-802674 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.28s)
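As with every other addon toggle in this run, the functional half of the test passed and only the disable step failed: before changing an addon, minikube checks that the cluster is not paused by listing kube-system containers with crictl and then running `sudo runc list -f json` on the node, and that runc call exits 1 because /run/runc does not exist, which surfaces as MK_ADDON_DISABLE_PAUSED. A diagnostic sketch for reproducing the check by hand on this profile (not a fix; whether CRI-O here keeps its runtime state under a different root, or uses a runtime other than runc, is an assumption to verify):

	# succeeds, matching the crictl listing captured in the stderr above
	minikube -p addons-802674 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# fails with "open /run/runc: no such file or directory", as in the stderr above
	minikube -p addons-802674 ssh -- sudo runc list -f json
	# see which runtime state directories actually exist on the node
	minikube -p addons-802674 ssh -- ls -d /run/runc /run/crio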

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.31s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.303215ms
I1213 13:06:59.072174  394130 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1213 13:06:59.072193  394130 kapi.go:107] duration metric: took 3.272328ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-lmm9f" [0e2bcdb7-46b3-4d40-ab19-396aa47a4f0e] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003506594s
addons_test.go:465: (dbg) Run:  kubectl --context addons-802674 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-802674 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-802674 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (246.552773ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:07:04.188944  404945 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:07:04.189059  404945 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:04.189068  404945 out.go:374] Setting ErrFile to fd 2...
	I1213 13:07:04.189073  404945 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:04.189271  404945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:07:04.189517  404945 mustload.go:66] Loading cluster: addons-802674
	I1213 13:07:04.189864  404945 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:04.189890  404945 addons.go:622] checking whether the cluster is paused
	I1213 13:07:04.189983  404945 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:04.189997  404945 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:07:04.190382  404945 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:07:04.208968  404945 ssh_runner.go:195] Run: systemctl --version
	I1213 13:07:04.209021  404945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:07:04.228676  404945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:07:04.325096  404945 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:07:04.325206  404945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:07:04.353494  404945 cri.go:89] found id: "efa46cf269b564b4844602a1d159fe37ab66ca5f6b418f189ee827b9dba093c8"
	I1213 13:07:04.353517  404945 cri.go:89] found id: "ae277236625835803563b5d3709c95c1715a58bb7565d8ec6086941d0839195e"
	I1213 13:07:04.353523  404945 cri.go:89] found id: "25c2ccc8d56eb50465c9572ac4b69e4c56f4fc5934e450f09c18265ee0577194"
	I1213 13:07:04.353528  404945 cri.go:89] found id: "00b38c263e00072dd7d50a33875d10fc536592b2a1d6e234346711aac7cbbec0"
	I1213 13:07:04.353533  404945 cri.go:89] found id: "6df323a2878def1aae2a14ca9c2ad038546721c6ae36b6f316f313176188b46c"
	I1213 13:07:04.353538  404945 cri.go:89] found id: "c5db025aa30e9cd2c67c81ec6bc3c8ea9785b55f88c26b2645dcdbd948a7de0d"
	I1213 13:07:04.353543  404945 cri.go:89] found id: "263b6770119de12b6f6ae321a34d15fe0c91d69ef191dfcd91463e142f87e2d3"
	I1213 13:07:04.353548  404945 cri.go:89] found id: "f08ae0fc41016ef54669e28ca43b05abcc99b07e53b91caaf3b697ef447ee88d"
	I1213 13:07:04.353552  404945 cri.go:89] found id: "6d85d43816c0e9c27cdb9a0406519758a7c04501507c2a84cdacf18ed0bfe19f"
	I1213 13:07:04.353572  404945 cri.go:89] found id: "40aee451d49aa718e9b9b630dcc767fa2e58079b1b3f9728f0c44aa6c3b5c7e5"
	I1213 13:07:04.353580  404945 cri.go:89] found id: "7f147ccf5e501405b11f6c314e4bfd0d7c26b4a6bf64001ba70bbe56a38b0504"
	I1213 13:07:04.353584  404945 cri.go:89] found id: "a9051d728dbfaaa93fa17ffd17029974b838f52c264e001035d0dcb21ffd793a"
	I1213 13:07:04.353587  404945 cri.go:89] found id: "fc7d97af030f51f4603abd265b93269845365378e8e8c119222bafedc7cc4351"
	I1213 13:07:04.353590  404945 cri.go:89] found id: "f4ac5ed0bb71af6a3a22c2384168e3c4e9e23c1de940ae834d03068e9fea08ee"
	I1213 13:07:04.353593  404945 cri.go:89] found id: "bb2165f7660fc2ba491c4871263b79975a85cb6def2d2a4f73eca8a2dd7d8f07"
	I1213 13:07:04.353600  404945 cri.go:89] found id: "be21f9e65e565f792b744333cf27c95ecfa408d73cd2a551f0c5c7f265a293e3"
	I1213 13:07:04.353606  404945 cri.go:89] found id: "810cfaaa4b7814b312b2db787f8ec029d4e832cc7e12034fb85045552bd3f724"
	I1213 13:07:04.353610  404945 cri.go:89] found id: "5eca19a8b70c2a0e9d976b959fbf7d7aa4c7ee8009fb16d38e7b5f5c02b8cce6"
	I1213 13:07:04.353613  404945 cri.go:89] found id: "d50cb67d5dec7ec3f682549ab14b880502935a667c57f8d8cdb0c463515a22e6"
	I1213 13:07:04.353616  404945 cri.go:89] found id: "b6315f71701be89e474fba173cf05ee0075e34674512768e6df77a3cc4cd9523"
	I1213 13:07:04.353621  404945 cri.go:89] found id: "610b806094f3861cda2f55f3c5ae8348739fd03173056cb05f1e55d0f129881d"
	I1213 13:07:04.353624  404945 cri.go:89] found id: "2a7f427a075b6ebada9bc037f76c3a7326d7c26ef26054dd05f59dd7a696441e"
	I1213 13:07:04.353626  404945 cri.go:89] found id: "9b7e546540c7cea0b7f684aeaa74db9dca87eb76f77d77d8121f3927b1239ae2"
	I1213 13:07:04.353628  404945 cri.go:89] found id: "dba035f34dd51a8cd71b4f0ae554035ac03076228fce0be93b5b35ef0ca0e069"
	I1213 13:07:04.353631  404945 cri.go:89] found id: ""
	I1213 13:07:04.353672  404945 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:07:04.367543  404945 out.go:203] 
	W1213 13:07:04.368619  404945 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 13:07:04.368639  404945 out.go:285] * 
	* 
	W1213 13:07:04.372604  404945 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 13:07:04.373949  404945 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-802674 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.31s)
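Here too the functional assertions passed: the metrics-server pod became healthy and `kubectl top pods -n kube-system` returned data; only the trailing disable call hit the same paused-check error. If the metrics pipeline itself ever needs verifying on this profile, the resource metrics API can be queried directly (a sketch; assumes metrics-server is still enabled):

	kubectl --context addons-802674 top pods -n kube-system
	kubectl --context addons-802674 get --raw /apis/metrics.k8s.io/v1beta1/nodes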

                                                
                                    
x
+
TestAddons/parallel/CSI (54.88s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1213 13:06:59.068938  394130 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.281916ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-802674 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-802674 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [4f5014bc-52c7-4135-8ca2-810cb68d454d] Pending
helpers_test.go:353: "task-pv-pod" [4f5014bc-52c7-4135-8ca2-810cb68d454d] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.00333237s
addons_test.go:574: (dbg) Run:  kubectl --context addons-802674 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-802674 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-802674 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-802674 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-802674 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-802674 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-802674 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [a1eed49a-2c5a-45c9-9530-107a852e2c08] Pending
helpers_test.go:353: "task-pv-pod-restore" [a1eed49a-2c5a-45c9-9530-107a852e2c08] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.003755926s
addons_test.go:616: (dbg) Run:  kubectl --context addons-802674 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-802674 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-802674 delete volumesnapshot new-snapshot-demo
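
The sequence above exercises the full snapshot/restore path: a PVC on the hostpath CSI driver, a pod writing to it, a VolumeSnapshot of that claim, and a second PVC restored from the snapshot through a dataSource reference. The exact manifests live under testdata/csi-hostpath-driver/; a condensed sketch of the snapshot-and-restore side, assuming the class names the addon normally installs (csi-hostpath-sc and csi-hostpath-snapclass):

	kubectl --context addons-802674 apply -f - <<-'EOF'
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
	  source:
	    persistentVolumeClaimName: hpvc
	---
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc                 # assumed class name
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 1Gi
	  dataSource:
	    name: new-snapshot-demo
	    kind: VolumeSnapshot
	    apiGroup: snapshot.storage.k8s.io
	EOF
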
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-802674 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-802674 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (249.309306ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:07:53.511670  408175 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:07:53.511924  408175 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:53.511933  408175 out.go:374] Setting ErrFile to fd 2...
	I1213 13:07:53.511937  408175 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:53.512111  408175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:07:53.512407  408175 mustload.go:66] Loading cluster: addons-802674
	I1213 13:07:53.512730  408175 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:53.512751  408175 addons.go:622] checking whether the cluster is paused
	I1213 13:07:53.512846  408175 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:53.512859  408175 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:07:53.513311  408175 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:07:53.534884  408175 ssh_runner.go:195] Run: systemctl --version
	I1213 13:07:53.534954  408175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:07:53.552813  408175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:07:53.647483  408175 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:07:53.647596  408175 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:07:53.676962  408175 cri.go:89] found id: "f7c4229a576fe07cf0919814dfee6c0705b49c93f6835f46215361a77c4c55ac"
	I1213 13:07:53.676986  408175 cri.go:89] found id: "efa46cf269b564b4844602a1d159fe37ab66ca5f6b418f189ee827b9dba093c8"
	I1213 13:07:53.676992  408175 cri.go:89] found id: "ae277236625835803563b5d3709c95c1715a58bb7565d8ec6086941d0839195e"
	I1213 13:07:53.676998  408175 cri.go:89] found id: "25c2ccc8d56eb50465c9572ac4b69e4c56f4fc5934e450f09c18265ee0577194"
	I1213 13:07:53.677003  408175 cri.go:89] found id: "00b38c263e00072dd7d50a33875d10fc536592b2a1d6e234346711aac7cbbec0"
	I1213 13:07:53.677010  408175 cri.go:89] found id: "6df323a2878def1aae2a14ca9c2ad038546721c6ae36b6f316f313176188b46c"
	I1213 13:07:53.677014  408175 cri.go:89] found id: "c5db025aa30e9cd2c67c81ec6bc3c8ea9785b55f88c26b2645dcdbd948a7de0d"
	I1213 13:07:53.677019  408175 cri.go:89] found id: "263b6770119de12b6f6ae321a34d15fe0c91d69ef191dfcd91463e142f87e2d3"
	I1213 13:07:53.677024  408175 cri.go:89] found id: "f08ae0fc41016ef54669e28ca43b05abcc99b07e53b91caaf3b697ef447ee88d"
	I1213 13:07:53.677039  408175 cri.go:89] found id: "6d85d43816c0e9c27cdb9a0406519758a7c04501507c2a84cdacf18ed0bfe19f"
	I1213 13:07:53.677049  408175 cri.go:89] found id: "40aee451d49aa718e9b9b630dcc767fa2e58079b1b3f9728f0c44aa6c3b5c7e5"
	I1213 13:07:53.677053  408175 cri.go:89] found id: "7f147ccf5e501405b11f6c314e4bfd0d7c26b4a6bf64001ba70bbe56a38b0504"
	I1213 13:07:53.677057  408175 cri.go:89] found id: "a9051d728dbfaaa93fa17ffd17029974b838f52c264e001035d0dcb21ffd793a"
	I1213 13:07:53.677061  408175 cri.go:89] found id: "fc7d97af030f51f4603abd265b93269845365378e8e8c119222bafedc7cc4351"
	I1213 13:07:53.677065  408175 cri.go:89] found id: "f4ac5ed0bb71af6a3a22c2384168e3c4e9e23c1de940ae834d03068e9fea08ee"
	I1213 13:07:53.677072  408175 cri.go:89] found id: "bb2165f7660fc2ba491c4871263b79975a85cb6def2d2a4f73eca8a2dd7d8f07"
	I1213 13:07:53.677076  408175 cri.go:89] found id: "be21f9e65e565f792b744333cf27c95ecfa408d73cd2a551f0c5c7f265a293e3"
	I1213 13:07:53.677082  408175 cri.go:89] found id: "810cfaaa4b7814b312b2db787f8ec029d4e832cc7e12034fb85045552bd3f724"
	I1213 13:07:53.677086  408175 cri.go:89] found id: "5eca19a8b70c2a0e9d976b959fbf7d7aa4c7ee8009fb16d38e7b5f5c02b8cce6"
	I1213 13:07:53.677091  408175 cri.go:89] found id: "d50cb67d5dec7ec3f682549ab14b880502935a667c57f8d8cdb0c463515a22e6"
	I1213 13:07:53.677095  408175 cri.go:89] found id: "b6315f71701be89e474fba173cf05ee0075e34674512768e6df77a3cc4cd9523"
	I1213 13:07:53.677099  408175 cri.go:89] found id: "610b806094f3861cda2f55f3c5ae8348739fd03173056cb05f1e55d0f129881d"
	I1213 13:07:53.677105  408175 cri.go:89] found id: "2a7f427a075b6ebada9bc037f76c3a7326d7c26ef26054dd05f59dd7a696441e"
	I1213 13:07:53.677110  408175 cri.go:89] found id: "9b7e546540c7cea0b7f684aeaa74db9dca87eb76f77d77d8121f3927b1239ae2"
	I1213 13:07:53.677115  408175 cri.go:89] found id: "dba035f34dd51a8cd71b4f0ae554035ac03076228fce0be93b5b35ef0ca0e069"
	I1213 13:07:53.677121  408175 cri.go:89] found id: ""
	I1213 13:07:53.677177  408175 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:07:53.691557  408175 out.go:203] 
	W1213 13:07:53.692766  408175 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 13:07:53.692792  408175 out.go:285] * 
	* 
	W1213 13:07:53.696660  408175 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 13:07:53.698145  408175 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-802674 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-802674 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-802674 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (244.916617ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:07:53.758208  408253 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:07:53.758348  408253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:53.758360  408253 out.go:374] Setting ErrFile to fd 2...
	I1213 13:07:53.758364  408253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:53.758560  408253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:07:53.758818  408253 mustload.go:66] Loading cluster: addons-802674
	I1213 13:07:53.759131  408253 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:53.759151  408253 addons.go:622] checking whether the cluster is paused
	I1213 13:07:53.759231  408253 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:53.759243  408253 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:07:53.759611  408253 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:07:53.778252  408253 ssh_runner.go:195] Run: systemctl --version
	I1213 13:07:53.778311  408253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:07:53.796896  408253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:07:53.892632  408253 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:07:53.892701  408253 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:07:53.921455  408253 cri.go:89] found id: "f7c4229a576fe07cf0919814dfee6c0705b49c93f6835f46215361a77c4c55ac"
	I1213 13:07:53.921476  408253 cri.go:89] found id: "efa46cf269b564b4844602a1d159fe37ab66ca5f6b418f189ee827b9dba093c8"
	I1213 13:07:53.921483  408253 cri.go:89] found id: "ae277236625835803563b5d3709c95c1715a58bb7565d8ec6086941d0839195e"
	I1213 13:07:53.921488  408253 cri.go:89] found id: "25c2ccc8d56eb50465c9572ac4b69e4c56f4fc5934e450f09c18265ee0577194"
	I1213 13:07:53.921492  408253 cri.go:89] found id: "00b38c263e00072dd7d50a33875d10fc536592b2a1d6e234346711aac7cbbec0"
	I1213 13:07:53.921496  408253 cri.go:89] found id: "6df323a2878def1aae2a14ca9c2ad038546721c6ae36b6f316f313176188b46c"
	I1213 13:07:53.921500  408253 cri.go:89] found id: "c5db025aa30e9cd2c67c81ec6bc3c8ea9785b55f88c26b2645dcdbd948a7de0d"
	I1213 13:07:53.921504  408253 cri.go:89] found id: "263b6770119de12b6f6ae321a34d15fe0c91d69ef191dfcd91463e142f87e2d3"
	I1213 13:07:53.921508  408253 cri.go:89] found id: "f08ae0fc41016ef54669e28ca43b05abcc99b07e53b91caaf3b697ef447ee88d"
	I1213 13:07:53.921516  408253 cri.go:89] found id: "6d85d43816c0e9c27cdb9a0406519758a7c04501507c2a84cdacf18ed0bfe19f"
	I1213 13:07:53.921520  408253 cri.go:89] found id: "40aee451d49aa718e9b9b630dcc767fa2e58079b1b3f9728f0c44aa6c3b5c7e5"
	I1213 13:07:53.921525  408253 cri.go:89] found id: "7f147ccf5e501405b11f6c314e4bfd0d7c26b4a6bf64001ba70bbe56a38b0504"
	I1213 13:07:53.921532  408253 cri.go:89] found id: "a9051d728dbfaaa93fa17ffd17029974b838f52c264e001035d0dcb21ffd793a"
	I1213 13:07:53.921541  408253 cri.go:89] found id: "fc7d97af030f51f4603abd265b93269845365378e8e8c119222bafedc7cc4351"
	I1213 13:07:53.921546  408253 cri.go:89] found id: "f4ac5ed0bb71af6a3a22c2384168e3c4e9e23c1de940ae834d03068e9fea08ee"
	I1213 13:07:53.921556  408253 cri.go:89] found id: "bb2165f7660fc2ba491c4871263b79975a85cb6def2d2a4f73eca8a2dd7d8f07"
	I1213 13:07:53.921564  408253 cri.go:89] found id: "be21f9e65e565f792b744333cf27c95ecfa408d73cd2a551f0c5c7f265a293e3"
	I1213 13:07:53.921569  408253 cri.go:89] found id: "810cfaaa4b7814b312b2db787f8ec029d4e832cc7e12034fb85045552bd3f724"
	I1213 13:07:53.921573  408253 cri.go:89] found id: "5eca19a8b70c2a0e9d976b959fbf7d7aa4c7ee8009fb16d38e7b5f5c02b8cce6"
	I1213 13:07:53.921578  408253 cri.go:89] found id: "d50cb67d5dec7ec3f682549ab14b880502935a667c57f8d8cdb0c463515a22e6"
	I1213 13:07:53.921588  408253 cri.go:89] found id: "b6315f71701be89e474fba173cf05ee0075e34674512768e6df77a3cc4cd9523"
	I1213 13:07:53.921592  408253 cri.go:89] found id: "610b806094f3861cda2f55f3c5ae8348739fd03173056cb05f1e55d0f129881d"
	I1213 13:07:53.921596  408253 cri.go:89] found id: "2a7f427a075b6ebada9bc037f76c3a7326d7c26ef26054dd05f59dd7a696441e"
	I1213 13:07:53.921601  408253 cri.go:89] found id: "9b7e546540c7cea0b7f684aeaa74db9dca87eb76f77d77d8121f3927b1239ae2"
	I1213 13:07:53.921611  408253 cri.go:89] found id: "dba035f34dd51a8cd71b4f0ae554035ac03076228fce0be93b5b35ef0ca0e069"
	I1213 13:07:53.921615  408253 cri.go:89] found id: ""
	I1213 13:07:53.921661  408253 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:07:53.936174  408253 out.go:203] 
	W1213 13:07:53.937587  408253 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 13:07:53.937609  408253 out.go:285] * 
	* 
	W1213 13:07:53.941737  408253 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 13:07:53.943240  408253 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-802674 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (54.88s)
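The driver pods themselves were healthy throughout (3 pods matched the kubernetes.io/minikube-addons=csi-hostpath-driver selector and both task-pv pods reached Running); the failure is confined to the two trailing disable calls. To re-check the driver components directly:

	kubectl --context addons-802674 -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver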

                                                
                                    
x
+
TestAddons/parallel/Headlamp (2.57s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-802674 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-802674 --alsologtostderr -v=1: exit status 11 (252.161413ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:06:59.127738  404057 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:06:59.127856  404057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:06:59.127864  404057 out.go:374] Setting ErrFile to fd 2...
	I1213 13:06:59.127868  404057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:06:59.128074  404057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:06:59.128326  404057 mustload.go:66] Loading cluster: addons-802674
	I1213 13:06:59.128680  404057 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:06:59.128716  404057 addons.go:622] checking whether the cluster is paused
	I1213 13:06:59.128866  404057 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:06:59.128880  404057 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:06:59.129544  404057 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:06:59.149589  404057 ssh_runner.go:195] Run: systemctl --version
	I1213 13:06:59.149646  404057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:06:59.168384  404057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:06:59.264666  404057 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:06:59.264768  404057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:06:59.292765  404057 cri.go:89] found id: "efa46cf269b564b4844602a1d159fe37ab66ca5f6b418f189ee827b9dba093c8"
	I1213 13:06:59.292802  404057 cri.go:89] found id: "ae277236625835803563b5d3709c95c1715a58bb7565d8ec6086941d0839195e"
	I1213 13:06:59.292808  404057 cri.go:89] found id: "25c2ccc8d56eb50465c9572ac4b69e4c56f4fc5934e450f09c18265ee0577194"
	I1213 13:06:59.292812  404057 cri.go:89] found id: "00b38c263e00072dd7d50a33875d10fc536592b2a1d6e234346711aac7cbbec0"
	I1213 13:06:59.292815  404057 cri.go:89] found id: "6df323a2878def1aae2a14ca9c2ad038546721c6ae36b6f316f313176188b46c"
	I1213 13:06:59.292826  404057 cri.go:89] found id: "c5db025aa30e9cd2c67c81ec6bc3c8ea9785b55f88c26b2645dcdbd948a7de0d"
	I1213 13:06:59.292829  404057 cri.go:89] found id: "263b6770119de12b6f6ae321a34d15fe0c91d69ef191dfcd91463e142f87e2d3"
	I1213 13:06:59.292832  404057 cri.go:89] found id: "f08ae0fc41016ef54669e28ca43b05abcc99b07e53b91caaf3b697ef447ee88d"
	I1213 13:06:59.292835  404057 cri.go:89] found id: "6d85d43816c0e9c27cdb9a0406519758a7c04501507c2a84cdacf18ed0bfe19f"
	I1213 13:06:59.292841  404057 cri.go:89] found id: "40aee451d49aa718e9b9b630dcc767fa2e58079b1b3f9728f0c44aa6c3b5c7e5"
	I1213 13:06:59.292843  404057 cri.go:89] found id: "7f147ccf5e501405b11f6c314e4bfd0d7c26b4a6bf64001ba70bbe56a38b0504"
	I1213 13:06:59.292846  404057 cri.go:89] found id: "a9051d728dbfaaa93fa17ffd17029974b838f52c264e001035d0dcb21ffd793a"
	I1213 13:06:59.292849  404057 cri.go:89] found id: "fc7d97af030f51f4603abd265b93269845365378e8e8c119222bafedc7cc4351"
	I1213 13:06:59.292852  404057 cri.go:89] found id: "f4ac5ed0bb71af6a3a22c2384168e3c4e9e23c1de940ae834d03068e9fea08ee"
	I1213 13:06:59.292855  404057 cri.go:89] found id: "bb2165f7660fc2ba491c4871263b79975a85cb6def2d2a4f73eca8a2dd7d8f07"
	I1213 13:06:59.292860  404057 cri.go:89] found id: "be21f9e65e565f792b744333cf27c95ecfa408d73cd2a551f0c5c7f265a293e3"
	I1213 13:06:59.292862  404057 cri.go:89] found id: "810cfaaa4b7814b312b2db787f8ec029d4e832cc7e12034fb85045552bd3f724"
	I1213 13:06:59.292867  404057 cri.go:89] found id: "5eca19a8b70c2a0e9d976b959fbf7d7aa4c7ee8009fb16d38e7b5f5c02b8cce6"
	I1213 13:06:59.292870  404057 cri.go:89] found id: "d50cb67d5dec7ec3f682549ab14b880502935a667c57f8d8cdb0c463515a22e6"
	I1213 13:06:59.292873  404057 cri.go:89] found id: "b6315f71701be89e474fba173cf05ee0075e34674512768e6df77a3cc4cd9523"
	I1213 13:06:59.292878  404057 cri.go:89] found id: "610b806094f3861cda2f55f3c5ae8348739fd03173056cb05f1e55d0f129881d"
	I1213 13:06:59.292881  404057 cri.go:89] found id: "2a7f427a075b6ebada9bc037f76c3a7326d7c26ef26054dd05f59dd7a696441e"
	I1213 13:06:59.292884  404057 cri.go:89] found id: "9b7e546540c7cea0b7f684aeaa74db9dca87eb76f77d77d8121f3927b1239ae2"
	I1213 13:06:59.292886  404057 cri.go:89] found id: "dba035f34dd51a8cd71b4f0ae554035ac03076228fce0be93b5b35ef0ca0e069"
	I1213 13:06:59.292889  404057 cri.go:89] found id: ""
	I1213 13:06:59.292928  404057 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:06:59.307020  404057 out.go:203] 
	W1213 13:06:59.308423  404057 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:06:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:06:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 13:06:59.308443  404057 out.go:285] * 
	* 
	W1213 13:06:59.312466  404057 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 13:06:59.314031  404057 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-802674 --alsologtostderr -v=1": exit status 11
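
The enable path runs the same paused check as the disable path and dies on the identical runc error (here MK_ADDON_ENABLE_PAUSED), so the headlamp deployment was never applied. Ruling out a genuinely paused cluster before retrying is cheap (same profile assumed; if only the check is broken, unpause should be a no-op):

	minikube status -p addons-802674
	minikube unpause -p addons-802674
	minikube addons enable headlamp -p addons-802674 --alsologtostderr -v=1
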
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-802674
helpers_test.go:244: (dbg) docker inspect addons-802674:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "270e64e091ea2f346242d00053ae9930f5b0ab1d4a8898bb83dfab5c4d9327dd",
	        "Created": "2025-12-13T13:05:07.436979754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 396538,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:05:07.468333702Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/270e64e091ea2f346242d00053ae9930f5b0ab1d4a8898bb83dfab5c4d9327dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/270e64e091ea2f346242d00053ae9930f5b0ab1d4a8898bb83dfab5c4d9327dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/270e64e091ea2f346242d00053ae9930f5b0ab1d4a8898bb83dfab5c4d9327dd/hosts",
	        "LogPath": "/var/lib/docker/containers/270e64e091ea2f346242d00053ae9930f5b0ab1d4a8898bb83dfab5c4d9327dd/270e64e091ea2f346242d00053ae9930f5b0ab1d4a8898bb83dfab5c4d9327dd-json.log",
	        "Name": "/addons-802674",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-802674:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-802674",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "270e64e091ea2f346242d00053ae9930f5b0ab1d4a8898bb83dfab5c4d9327dd",
	                "LowerDir": "/var/lib/docker/overlay2/aa929da5763204b22b7e604ac815e80f96b30dfe5cd1593cf34830d30d7d00f5-init/diff:/var/lib/docker/overlay2/2ab30f867418f233812f5ff754587aaeab7569a5579dc6a5c99873a35cf81eb6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aa929da5763204b22b7e604ac815e80f96b30dfe5cd1593cf34830d30d7d00f5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aa929da5763204b22b7e604ac815e80f96b30dfe5cd1593cf34830d30d7d00f5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aa929da5763204b22b7e604ac815e80f96b30dfe5cd1593cf34830d30d7d00f5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-802674",
	                "Source": "/var/lib/docker/volumes/addons-802674/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-802674",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-802674",
	                "name.minikube.sigs.k8s.io": "addons-802674",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9b7891cc6378426857df09cba08a56ab9633cf1ab32151364aff4fdc3cf11f57",
	            "SandboxKey": "/var/run/docker/netns/9b7891cc6378",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-802674": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "761dfdd70f6193271784b82d834b359f6576b740ac6d713183b9e21d7d14e9a1",
	                    "EndpointID": "757048b00cb230b2b4d7c6bdcb1de7c61e1295333cd75069a97fae99a1d19210",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "32:57:aa:2d:c9:90",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-802674",
	                        "270e64e091ea"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
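The inspect output above shows each published container port bound to 127.0.0.1 with an ephemeral host port (33143-33147 in this run). The harness resolves these mappings with the same docker CLI calls that appear later in this log; the lines below are a minimal sketch of running that lookup by hand, assuming the addons-802674 container from this run still exists on the host (both commands mirror the cli_runner invocations logged further down).

	# Check whether the kic container is still running (same State.Status query the harness logs).
	docker container inspect addons-802674 --format={{.State.Status}}
	# Resolve the host port forwarded to the container's SSH port 22/tcp (same Go template the provisioner logs).
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-802674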
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-802674 -n addons-802674
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-802674 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-802674 logs -n 25: (1.171343784s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-292122 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-292122   │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ delete  │ -p download-only-292122                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-292122   │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ start   │ -o=json --download-only -p download-only-703172 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-703172   │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ delete  │ -p download-only-703172                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-703172   │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ start   │ -o=json --download-only -p download-only-519964 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-519964   │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ delete  │ -p download-only-519964                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-519964   │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ delete  │ -p download-only-292122                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-292122   │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ delete  │ -p download-only-703172                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-703172   │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ delete  │ -p download-only-519964                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-519964   │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ start   │ --download-only -p download-docker-752677 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-752677 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	│ delete  │ -p download-docker-752677                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-752677 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ start   │ --download-only -p binary-mirror-589486 --alsologtostderr --binary-mirror http://127.0.0.1:44049 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-589486   │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	│ delete  │ -p binary-mirror-589486                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-589486   │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ addons  │ disable dashboard -p addons-802674                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-802674          │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	│ addons  │ enable dashboard -p addons-802674                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-802674          │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	│ start   │ -p addons-802674 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-802674          │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:06 UTC │
	│ addons  │ addons-802674 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-802674          │ jenkins │ v1.37.0 │ 13 Dec 25 13:06 UTC │                     │
	│ addons  │ addons-802674 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-802674          │ jenkins │ v1.37.0 │ 13 Dec 25 13:06 UTC │                     │
	│ addons  │ enable headlamp -p addons-802674 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-802674          │ jenkins │ v1.37.0 │ 13 Dec 25 13:06 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:04:44
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:04:44.951695  395903 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:04:44.951975  395903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:04:44.951988  395903 out.go:374] Setting ErrFile to fd 2...
	I1213 13:04:44.951992  395903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:04:44.952172  395903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:04:44.952648  395903 out.go:368] Setting JSON to false
	I1213 13:04:44.953550  395903 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6433,"bootTime":1765624652,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:04:44.953602  395903 start.go:143] virtualization: kvm guest
	I1213 13:04:44.955322  395903 out.go:179] * [addons-802674] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:04:44.956540  395903 notify.go:221] Checking for updates...
	I1213 13:04:44.956566  395903 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:04:44.957828  395903 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:04:44.959074  395903 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:04:44.960190  395903 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:04:44.961216  395903 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:04:44.962302  395903 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:04:44.963872  395903 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:04:44.986753  395903 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:04:44.986866  395903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:04:45.043388  395903 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-13 13:04:45.03440041 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:04:45.043497  395903 docker.go:319] overlay module found
	I1213 13:04:45.045051  395903 out.go:179] * Using the docker driver based on user configuration
	I1213 13:04:45.046015  395903 start.go:309] selected driver: docker
	I1213 13:04:45.046034  395903 start.go:927] validating driver "docker" against <nil>
	I1213 13:04:45.046051  395903 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:04:45.046671  395903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:04:45.098882  395903 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-13 13:04:45.089450922 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:04:45.099045  395903 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 13:04:45.099250  395903 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:04:45.100737  395903 out.go:179] * Using Docker driver with root privileges
	I1213 13:04:45.101823  395903 cni.go:84] Creating CNI manager for ""
	I1213 13:04:45.101905  395903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:04:45.101920  395903 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 13:04:45.102004  395903 start.go:353] cluster config:
	{Name:addons-802674 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-802674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:04:45.103353  395903 out.go:179] * Starting "addons-802674" primary control-plane node in "addons-802674" cluster
	I1213 13:04:45.104449  395903 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 13:04:45.105447  395903 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 13:04:45.106611  395903 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:04:45.106648  395903 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 13:04:45.106677  395903 cache.go:65] Caching tarball of preloaded images
	I1213 13:04:45.106732  395903 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 13:04:45.106896  395903 preload.go:238] Found /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 13:04:45.106919  395903 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 13:04:45.107282  395903 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/config.json ...
	I1213 13:04:45.107309  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/config.json: {Name:mkecf58a651585115de101f1a06b6b9ad5bfd689 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:04:45.122846  395903 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 13:04:45.122959  395903 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1213 13:04:45.122974  395903 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1213 13:04:45.122978  395903 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1213 13:04:45.122988  395903 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1213 13:04:45.122994  395903 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from local cache
	I1213 13:04:56.836949  395903 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from cached tarball
	I1213 13:04:56.836991  395903 cache.go:243] Successfully downloaded all kic artifacts
	I1213 13:04:56.837037  395903 start.go:360] acquireMachinesLock for addons-802674: {Name:mk0ce315a4c9f97eec976638407460e431021c73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:04:56.837137  395903 start.go:364] duration metric: took 77.132µs to acquireMachinesLock for "addons-802674"
	I1213 13:04:56.837159  395903 start.go:93] Provisioning new machine with config: &{Name:addons-802674 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-802674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:04:56.837234  395903 start.go:125] createHost starting for "" (driver="docker")
	I1213 13:04:56.838884  395903 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1213 13:04:56.839120  395903 start.go:159] libmachine.API.Create for "addons-802674" (driver="docker")
	I1213 13:04:56.839157  395903 client.go:173] LocalClient.Create starting
	I1213 13:04:56.839252  395903 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem
	I1213 13:04:56.888168  395903 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem
	I1213 13:04:56.941645  395903 cli_runner.go:164] Run: docker network inspect addons-802674 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 13:04:56.958586  395903 cli_runner.go:211] docker network inspect addons-802674 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 13:04:56.958671  395903 network_create.go:284] running [docker network inspect addons-802674] to gather additional debugging logs...
	I1213 13:04:56.958688  395903 cli_runner.go:164] Run: docker network inspect addons-802674
	W1213 13:04:56.974960  395903 cli_runner.go:211] docker network inspect addons-802674 returned with exit code 1
	I1213 13:04:56.974994  395903 network_create.go:287] error running [docker network inspect addons-802674]: docker network inspect addons-802674: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-802674 not found
	I1213 13:04:56.975008  395903 network_create.go:289] output of [docker network inspect addons-802674]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-802674 not found
	
	** /stderr **
	I1213 13:04:56.975097  395903 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:04:56.990971  395903 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e7c410}
	I1213 13:04:56.991012  395903 network_create.go:124] attempt to create docker network addons-802674 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1213 13:04:56.991058  395903 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-802674 addons-802674
	I1213 13:04:57.242026  395903 network_create.go:108] docker network addons-802674 192.168.49.0/24 created
	I1213 13:04:57.242070  395903 kic.go:121] calculated static IP "192.168.49.2" for the "addons-802674" container
	I1213 13:04:57.242140  395903 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 13:04:57.258327  395903 cli_runner.go:164] Run: docker volume create addons-802674 --label name.minikube.sigs.k8s.io=addons-802674 --label created_by.minikube.sigs.k8s.io=true
	I1213 13:04:57.310915  395903 oci.go:103] Successfully created a docker volume addons-802674
	I1213 13:04:57.310993  395903 cli_runner.go:164] Run: docker run --rm --name addons-802674-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-802674 --entrypoint /usr/bin/test -v addons-802674:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 13:05:03.623744  395903 cli_runner.go:217] Completed: docker run --rm --name addons-802674-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-802674 --entrypoint /usr/bin/test -v addons-802674:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (6.312704028s)
	I1213 13:05:03.623798  395903 oci.go:107] Successfully prepared a docker volume addons-802674
	I1213 13:05:03.623901  395903 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:05:03.623920  395903 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 13:05:03.623999  395903 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-802674:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 13:05:07.365297  395903 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-802674:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.741234248s)
	I1213 13:05:07.365334  395903 kic.go:203] duration metric: took 3.741411187s to extract preloaded images to volume ...
	W1213 13:05:07.365440  395903 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1213 13:05:07.365480  395903 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1213 13:05:07.365529  395903 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 13:05:07.421238  395903 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-802674 --name addons-802674 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-802674 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-802674 --network addons-802674 --ip 192.168.49.2 --volume addons-802674:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 13:05:07.685223  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Running}}
	I1213 13:05:07.703335  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:07.719634  395903 cli_runner.go:164] Run: docker exec addons-802674 stat /var/lib/dpkg/alternatives/iptables
	I1213 13:05:07.766098  395903 oci.go:144] the created container "addons-802674" has a running status.
	I1213 13:05:07.766142  395903 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa...
	I1213 13:05:07.814959  395903 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 13:05:07.844868  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:07.862673  395903 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 13:05:07.862692  395903 kic_runner.go:114] Args: [docker exec --privileged addons-802674 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 13:05:07.902890  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:07.923315  395903 machine.go:94] provisionDockerMachine start ...
	I1213 13:05:07.923415  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:07.945278  395903 main.go:143] libmachine: Using SSH client type: native
	I1213 13:05:07.945638  395903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1213 13:05:07.945661  395903 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:05:07.946339  395903 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56446->127.0.0.1:33143: read: connection reset by peer
	I1213 13:05:11.076393  395903 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-802674
	
	I1213 13:05:11.076421  395903 ubuntu.go:182] provisioning hostname "addons-802674"
	I1213 13:05:11.076486  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:11.094349  395903 main.go:143] libmachine: Using SSH client type: native
	I1213 13:05:11.094607  395903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1213 13:05:11.094629  395903 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-802674 && echo "addons-802674" | sudo tee /etc/hostname
	I1213 13:05:11.234125  395903 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-802674
	
	I1213 13:05:11.234211  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:11.251529  395903 main.go:143] libmachine: Using SSH client type: native
	I1213 13:05:11.251806  395903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1213 13:05:11.251836  395903 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-802674' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-802674/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-802674' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:05:11.383057  395903 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 13:05:11.383107  395903 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-390571/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-390571/.minikube}
	I1213 13:05:11.383145  395903 ubuntu.go:190] setting up certificates
	I1213 13:05:11.383166  395903 provision.go:84] configureAuth start
	I1213 13:05:11.383231  395903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-802674
	I1213 13:05:11.400319  395903 provision.go:143] copyHostCerts
	I1213 13:05:11.400418  395903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem (1078 bytes)
	I1213 13:05:11.400534  395903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem (1123 bytes)
	I1213 13:05:11.400608  395903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem (1679 bytes)
	I1213 13:05:11.400662  395903 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem org=jenkins.addons-802674 san=[127.0.0.1 192.168.49.2 addons-802674 localhost minikube]
	I1213 13:05:11.447356  395903 provision.go:177] copyRemoteCerts
	I1213 13:05:11.447414  395903 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:05:11.447449  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:11.465388  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:11.560753  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:05:11.578971  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 13:05:11.595573  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 13:05:11.612306  395903 provision.go:87] duration metric: took 229.11797ms to configureAuth
	I1213 13:05:11.612327  395903 ubuntu.go:206] setting minikube options for container-runtime
	I1213 13:05:11.612493  395903 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:05:11.612610  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:11.629684  395903 main.go:143] libmachine: Using SSH client type: native
	I1213 13:05:11.629924  395903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1213 13:05:11.629940  395903 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:05:11.897039  395903 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:05:11.897065  395903 machine.go:97] duration metric: took 3.973724993s to provisionDockerMachine
	I1213 13:05:11.897077  395903 client.go:176] duration metric: took 15.057911494s to LocalClient.Create
	I1213 13:05:11.897097  395903 start.go:167] duration metric: took 15.057978862s to libmachine.API.Create "addons-802674"
	I1213 13:05:11.897105  395903 start.go:293] postStartSetup for "addons-802674" (driver="docker")
	I1213 13:05:11.897115  395903 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:05:11.897172  395903 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:05:11.897206  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:11.914876  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:12.011356  395903 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:05:12.014681  395903 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 13:05:12.014708  395903 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 13:05:12.014721  395903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/addons for local assets ...
	I1213 13:05:12.014809  395903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/files for local assets ...
	I1213 13:05:12.014844  395903 start.go:296] duration metric: took 117.732854ms for postStartSetup
	I1213 13:05:12.015108  395903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-802674
	I1213 13:05:12.032887  395903 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/config.json ...
	I1213 13:05:12.033184  395903 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:05:12.033261  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:12.050942  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:12.142759  395903 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 13:05:12.147068  395903 start.go:128] duration metric: took 15.309818955s to createHost
	I1213 13:05:12.147091  395903 start.go:83] releasing machines lock for "addons-802674", held for 15.309943372s
	I1213 13:05:12.147156  395903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-802674
	I1213 13:05:12.164286  395903 ssh_runner.go:195] Run: cat /version.json
	I1213 13:05:12.164340  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:12.164389  395903 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:05:12.164472  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:12.182041  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:12.182434  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:12.327040  395903 ssh_runner.go:195] Run: systemctl --version
	I1213 13:05:12.333452  395903 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:05:12.367221  395903 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 13:05:12.371959  395903 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:05:12.372033  395903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:05:12.397050  395903 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 13:05:12.397077  395903 start.go:496] detecting cgroup driver to use...
	I1213 13:05:12.397108  395903 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 13:05:12.397148  395903 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:05:12.412619  395903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:05:12.424101  395903 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:05:12.424172  395903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:05:12.439536  395903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:05:12.455822  395903 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:05:12.535319  395903 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:05:12.621601  395903 docker.go:234] disabling docker service ...
	I1213 13:05:12.621672  395903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:05:12.640150  395903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:05:12.651700  395903 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:05:12.731247  395903 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:05:12.809540  395903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:05:12.821297  395903 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:05:12.834593  395903 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 13:05:12.834654  395903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:05:12.844419  395903 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 13:05:12.844476  395903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:05:12.852740  395903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:05:12.860641  395903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:05:12.868531  395903 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:05:12.875825  395903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:05:12.883814  395903 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:05:12.896299  395903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:05:12.904242  395903 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:05:12.910917  395903 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:05:12.918139  395903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:05:12.993561  395903 ssh_runner.go:195] Run: sudo systemctl restart crio
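
The run above rewrites /etc/crio/crio.conf.d/02-crio.conf before restarting CRI-O: it pins the pause image to registry.k8s.io/pause:3.10.1, switches cgroup_manager to "systemd", re-adds conmon_cgroup = "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. The Go sketch below simply replays a representative subset of those shell edits (the sysctl edits are elided for brevity); it is illustrative only and is not minikube's actual crio.go implementation.

```go
package main

import (
	"fmt"
	"os/exec"
)

// crioConfEdits mirrors the sed edits seen in the log above; the commands
// are copied from the trace, but wrapping them in a local loop like this is
// purely a sketch of the idea (minikube runs them remotely via ssh_runner).
var crioConfEdits = []string{
	`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo systemctl restart crio`,
}

func main() {
	for _, cmd := range crioConfEdits {
		// Each edit goes through a shell, like the remote runs in the log.
		if out, err := exec.Command("sh", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("edit failed: %v\n%s\n", err, out)
			return
		}
	}
	fmt.Println("CRI-O reconfigured and restarted")
}
```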
	I1213 13:05:13.126025  395903 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:05:13.126104  395903 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:05:13.129940  395903 start.go:564] Will wait 60s for crictl version
	I1213 13:05:13.129988  395903 ssh_runner.go:195] Run: which crictl
	I1213 13:05:13.133365  395903 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 13:05:13.157882  395903 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 13:05:13.157972  395903 ssh_runner.go:195] Run: crio --version
	I1213 13:05:13.184740  395903 ssh_runner.go:195] Run: crio --version
	I1213 13:05:13.212660  395903 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 13:05:13.213841  395903 cli_runner.go:164] Run: docker network inspect addons-802674 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:05:13.230738  395903 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 13:05:13.234604  395903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:05:13.244446  395903 kubeadm.go:884] updating cluster {Name:addons-802674 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-802674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:05:13.244565  395903 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:05:13.244607  395903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:05:13.273510  395903 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:05:13.273527  395903 crio.go:433] Images already preloaded, skipping extraction
	I1213 13:05:13.273567  395903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:05:13.296142  395903 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:05:13.296164  395903 cache_images.go:86] Images are preloaded, skipping loading
	I1213 13:05:13.296173  395903 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1213 13:05:13.296260  395903 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-802674 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-802674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 13:05:13.296330  395903 ssh_runner.go:195] Run: crio config
	I1213 13:05:13.341122  395903 cni.go:84] Creating CNI manager for ""
	I1213 13:05:13.341147  395903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:05:13.341169  395903 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 13:05:13.341193  395903 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-802674 NodeName:addons-802674 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:05:13.341327  395903 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-802674"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 13:05:13.341388  395903 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 13:05:13.349251  395903 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:05:13.349313  395903 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:05:13.356669  395903 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1213 13:05:13.368993  395903 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 13:05:13.383278  395903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
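
The kubeadm.yaml rendered above (2209 bytes, copied to /var/tmp/minikube/kubeadm.yaml.new) is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick way to sanity-check such a file is to decode each document's apiVersion/kind, as in the hedged sketch below; it assumes the gopkg.in/yaml.v3 package, but any multi-document YAML decoder would do.

```go
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3" // third-party YAML parser, used here only for illustration
)

// docHeader captures just the identifying fields of each kubeadm document.
type docHeader struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var h docHeader
		if err := dec.Decode(&h); err == io.EOF {
			break
		} else if err != nil {
			fmt.Println("decode error:", err)
			return
		}
		// For the config shown above this prints four documents:
		// InitConfiguration, ClusterConfiguration, KubeletConfiguration,
		// KubeProxyConfiguration.
		fmt.Printf("%s / %s\n", h.APIVersion, h.Kind)
	}
}
```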
	I1213 13:05:13.394612  395903 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 13:05:13.397887  395903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
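
Both hosts entries (host.minikube.internal earlier and control-plane.minikube.internal here) are pinned with the same idempotent pattern: drop any existing line ending in the name, append a fresh `IP<TAB>name` line, and copy the result back over /etc/hosts. A rough Go equivalent of that pipeline, for illustration only and best run against a copy of the file:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites an /etc/hosts-style file so that exactly one line maps
// name to ip, mirroring the `grep -v ...; echo ...` pipeline in the log.
func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Hypothetical local copy of /etc/hosts used for the example.
	if err := pinHost("hosts.copy", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
```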
	I1213 13:05:13.407004  395903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:05:13.485308  395903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:05:13.510258  395903 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674 for IP: 192.168.49.2
	I1213 13:05:13.510279  395903 certs.go:195] generating shared ca certs ...
	I1213 13:05:13.510302  395903 certs.go:227] acquiring lock for ca certs: {Name:mkb6963f3134ffd486c672ddb3a967e56122d5d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:13.510441  395903 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key
	I1213 13:05:13.585456  395903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt ...
	I1213 13:05:13.585484  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt: {Name:mkbe6268781c2593d1b2a5df3e1ac616a830a0d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:13.585769  395903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key ...
	I1213 13:05:13.585809  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key: {Name:mked8d99a218a7be1585007abbfdeebc7c1923af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:13.585979  395903 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key
	I1213 13:05:13.765995  395903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt ...
	I1213 13:05:13.766029  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt: {Name:mk5044bf824cba2459cb0a754c1bb7c6e978d3e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:13.766234  395903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key ...
	I1213 13:05:13.766250  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key: {Name:mk628e8c4a55d459061863b7406789f36f4492a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:13.766358  395903 certs.go:257] generating profile certs ...
	I1213 13:05:13.766431  395903 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.key
	I1213 13:05:13.766446  395903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt with IP's: []
	I1213 13:05:13.845037  395903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt ...
	I1213 13:05:13.845066  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: {Name:mk054cfd3343c256548cf41a3693281b626b8888 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:13.845251  395903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.key ...
	I1213 13:05:13.845266  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.key: {Name:mkc9cee120f1f6bd3a416b22a88c1d52218ccb68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:13.845370  395903 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.key.c612d677
	I1213 13:05:13.845390  395903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.crt.c612d677 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1213 13:05:13.867580  395903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.crt.c612d677 ...
	I1213 13:05:13.867602  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.crt.c612d677: {Name:mk8fe8a5adab02b947191b431981eaaee59403fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:13.867739  395903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.key.c612d677 ...
	I1213 13:05:13.867760  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.key.c612d677: {Name:mk766c61ae7dcc9b47f772b9a771c9f092571a26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:13.867870  395903 certs.go:382] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.crt.c612d677 -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.crt
	I1213 13:05:13.867969  395903 certs.go:386] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.key.c612d677 -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.key
	I1213 13:05:13.868033  395903 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/proxy-client.key
	I1213 13:05:13.868051  395903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/proxy-client.crt with IP's: []
	I1213 13:05:13.969028  395903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/proxy-client.crt ...
	I1213 13:05:13.969053  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/proxy-client.crt: {Name:mkd61be8248d1931b4aa61ed6cc43eb7679cae12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:13.969211  395903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/proxy-client.key ...
	I1213 13:05:13.969226  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/proxy-client.key: {Name:mk8fb6e2f1728c6f07435c4e1d84d8766afaf9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:13.969427  395903 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:05:13.969464  395903 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:05:13.969490  395903 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:05:13.969512  395903 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem (1679 bytes)
	I1213 13:05:13.970093  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:05:13.988361  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 13:05:14.005215  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:05:14.022321  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 13:05:14.039158  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 13:05:14.055721  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 13:05:14.072631  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:05:14.088947  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 13:05:14.105435  395903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:05:14.123720  395903 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
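
The certs.go steps above create a shared self-signed CA ("minikubeCA"), then sign a profile API-server certificate whose IP SANs cover the service VIP (10.96.0.1), loopback, and the node IP (192.168.49.2), before copying everything under /var/lib/minikube/certs. The sketch below reproduces that shape with nothing but Go's crypto/x509; the names, key sizes, and lifetimes are illustrative, not minikube's actual values.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Self-signed CA, analogous to the shared "minikubeCA" generated above.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// API-server certificate signed by that CA, carrying the same IP SANs
	// the log lists for apiserver.crt: service VIP, loopback, node IP.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
		},
	}
	srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}
```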
	I1213 13:05:14.135803  395903 ssh_runner.go:195] Run: openssl version
	I1213 13:05:14.141604  395903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:05:14.148286  395903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:05:14.157148  395903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:05:14.160533  395903 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:05:14.160597  395903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:05:14.193841  395903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 13:05:14.201761  395903 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 13:05:14.208827  395903 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:05:14.212191  395903 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 13:05:14.212235  395903 kubeadm.go:401] StartCluster: {Name:addons-802674 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-802674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:05:14.212316  395903 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:05:14.212364  395903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:05:14.238801  395903 cri.go:89] found id: ""
	I1213 13:05:14.238849  395903 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:05:14.246513  395903 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 13:05:14.254489  395903 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 13:05:14.254531  395903 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 13:05:14.261552  395903 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 13:05:14.261568  395903 kubeadm.go:158] found existing configuration files:
	
	I1213 13:05:14.261622  395903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 13:05:14.268607  395903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 13:05:14.268652  395903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 13:05:14.275449  395903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 13:05:14.282456  395903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 13:05:14.282504  395903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 13:05:14.289847  395903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 13:05:14.297418  395903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 13:05:14.297464  395903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 13:05:14.304246  395903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 13:05:14.311149  395903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 13:05:14.311192  395903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 13:05:14.318036  395903 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 13:05:14.354521  395903 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 13:05:14.354597  395903 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 13:05:14.388797  395903 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 13:05:14.388882  395903 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1213 13:05:14.388925  395903 kubeadm.go:319] OS: Linux
	I1213 13:05:14.388984  395903 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 13:05:14.389045  395903 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 13:05:14.389106  395903 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 13:05:14.389164  395903 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 13:05:14.389269  395903 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 13:05:14.389335  395903 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 13:05:14.389390  395903 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 13:05:14.389442  395903 kubeadm.go:319] CGROUPS_IO: enabled
	I1213 13:05:14.449420  395903 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 13:05:14.449546  395903 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 13:05:14.449670  395903 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 13:05:14.457463  395903 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 13:05:14.459289  395903 out.go:252]   - Generating certificates and keys ...
	I1213 13:05:14.459400  395903 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 13:05:14.459507  395903 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 13:05:14.815230  395903 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 13:05:15.014501  395903 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 13:05:15.501546  395903 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 13:05:16.158321  395903 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 13:05:16.209825  395903 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 13:05:16.209971  395903 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-802674 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 13:05:16.325388  395903 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 13:05:16.325569  395903 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-802674 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 13:05:16.879110  395903 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 13:05:17.048064  395903 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 13:05:17.227180  395903 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 13:05:17.227253  395903 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 13:05:17.329825  395903 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 13:05:17.368318  395903 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 13:05:17.655083  395903 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 13:05:17.736743  395903 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 13:05:18.218958  395903 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 13:05:18.219426  395903 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 13:05:18.222733  395903 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 13:05:18.223938  395903 out.go:252]   - Booting up control plane ...
	I1213 13:05:18.224063  395903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 13:05:18.224177  395903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 13:05:18.224836  395903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 13:05:18.238023  395903 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 13:05:18.238150  395903 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 13:05:18.244127  395903 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 13:05:18.244403  395903 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 13:05:18.244439  395903 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 13:05:18.345144  395903 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 13:05:18.345286  395903 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 13:05:19.346704  395903 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001722958s
	I1213 13:05:19.351102  395903 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 13:05:19.351228  395903 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1213 13:05:19.351340  395903 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 13:05:19.351451  395903 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 13:05:20.355486  395903 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004352532s
	I1213 13:05:20.929932  395903 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.578835844s
	I1213 13:05:22.853003  395903 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501895816s
	I1213 13:05:22.870210  395903 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 13:05:22.878708  395903 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 13:05:22.887074  395903 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 13:05:22.887375  395903 kubeadm.go:319] [mark-control-plane] Marking the node addons-802674 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 13:05:22.894314  395903 kubeadm.go:319] [bootstrap-token] Using token: mcbcc2.gt01yxp6tdtgacjl
	I1213 13:05:22.895341  395903 out.go:252]   - Configuring RBAC rules ...
	I1213 13:05:22.895511  395903 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 13:05:22.898537  395903 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 13:05:22.903956  395903 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 13:05:22.905997  395903 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 13:05:22.908065  395903 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 13:05:22.910298  395903 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 13:05:23.259057  395903 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 13:05:23.673978  395903 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 13:05:24.258481  395903 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 13:05:24.259342  395903 kubeadm.go:319] 
	I1213 13:05:24.259430  395903 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 13:05:24.259441  395903 kubeadm.go:319] 
	I1213 13:05:24.259525  395903 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 13:05:24.259536  395903 kubeadm.go:319] 
	I1213 13:05:24.259556  395903 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 13:05:24.259689  395903 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 13:05:24.259802  395903 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 13:05:24.259812  395903 kubeadm.go:319] 
	I1213 13:05:24.259895  395903 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 13:05:24.259905  395903 kubeadm.go:319] 
	I1213 13:05:24.259949  395903 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 13:05:24.259955  395903 kubeadm.go:319] 
	I1213 13:05:24.259998  395903 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 13:05:24.260112  395903 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 13:05:24.260169  395903 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 13:05:24.260196  395903 kubeadm.go:319] 
	I1213 13:05:24.260264  395903 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 13:05:24.260406  395903 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 13:05:24.260424  395903 kubeadm.go:319] 
	I1213 13:05:24.260540  395903 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token mcbcc2.gt01yxp6tdtgacjl \
	I1213 13:05:24.260698  395903 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ef8a7d1add12598ce2ec2dab13c01ff0d42437969bb9f662810a30bd819ab8f9 \
	I1213 13:05:24.260728  395903 kubeadm.go:319] 	--control-plane 
	I1213 13:05:24.260738  395903 kubeadm.go:319] 
	I1213 13:05:24.260880  395903 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 13:05:24.260892  395903 kubeadm.go:319] 
	I1213 13:05:24.261003  395903 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token mcbcc2.gt01yxp6tdtgacjl \
	I1213 13:05:24.261169  395903 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ef8a7d1add12598ce2ec2dab13c01ff0d42437969bb9f662810a30bd819ab8f9 
	I1213 13:05:24.263318  395903 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1213 13:05:24.263476  395903 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 13:05:24.263586  395903 cni.go:84] Creating CNI manager for ""
	I1213 13:05:24.263600  395903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:05:24.265018  395903 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1213 13:05:24.266116  395903 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 13:05:24.270351  395903 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 13:05:24.270368  395903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1213 13:05:24.282883  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 13:05:24.483344  395903 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 13:05:24.483451  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:24.483451  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-802674 minikube.k8s.io/updated_at=2025_12_13T13_05_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7 minikube.k8s.io/name=addons-802674 minikube.k8s.io/primary=true
	I1213 13:05:24.492577  395903 ops.go:34] apiserver oom_adj: -16
	I1213 13:05:24.558156  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:25.058485  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:25.558964  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:26.059074  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:26.559199  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:27.058915  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:27.558294  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:28.058981  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:28.559092  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:29.058198  395903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:05:29.122736  395903 kubeadm.go:1114] duration metric: took 4.6393628s to wait for elevateKubeSystemPrivileges
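
The repeated `kubectl get sa default` runs above are a readiness poll: the command is retried roughly every 500 ms until the default service account exists (about 4.6 s here), after which the privilege-elevation step is considered done. A hedged sketch of that wait loop, using the kubectl and kubeconfig paths shown in the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// deadline passes, mirroring the repeated runs in the log. Sketch only.
func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.2/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			return nil // the default service account is visible
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
```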
	I1213 13:05:29.122786  395903 kubeadm.go:403] duration metric: took 14.910542769s to StartCluster
	I1213 13:05:29.122815  395903 settings.go:142] acquiring lock: {Name:mkb44193ba58b09d8615650747eaad19c43e1a80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:29.122948  395903 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:05:29.123341  395903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/kubeconfig: {Name:mke96882ff9199e558f67b9408c8f04265bde7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:05:29.123548  395903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 13:05:29.123563  395903 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:05:29.123644  395903 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1213 13:05:29.123765  395903 addons.go:70] Setting yakd=true in profile "addons-802674"
	I1213 13:05:29.123799  395903 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:05:29.123813  395903 addons.go:239] Setting addon yakd=true in "addons-802674"
	I1213 13:05:29.123816  395903 addons.go:70] Setting inspektor-gadget=true in profile "addons-802674"
	I1213 13:05:29.123849  395903 addons.go:70] Setting default-storageclass=true in profile "addons-802674"
	I1213 13:05:29.123851  395903 addons.go:239] Setting addon inspektor-gadget=true in "addons-802674"
	I1213 13:05:29.123857  395903 addons.go:70] Setting storage-provisioner=true in profile "addons-802674"
	I1213 13:05:29.123869  395903 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-802674"
	I1213 13:05:29.123876  395903 addons.go:239] Setting addon storage-provisioner=true in "addons-802674"
	I1213 13:05:29.123897  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.123887  395903 addons.go:70] Setting cloud-spanner=true in profile "addons-802674"
	I1213 13:05:29.123911  395903 addons.go:70] Setting metrics-server=true in profile "addons-802674"
	I1213 13:05:29.123934  395903 addons.go:239] Setting addon metrics-server=true in "addons-802674"
	I1213 13:05:29.123945  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.123963  395903 addons.go:239] Setting addon cloud-spanner=true in "addons-802674"
	I1213 13:05:29.123984  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.124022  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.124197  395903 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-802674"
	I1213 13:05:29.124258  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.124298  395903 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-802674"
	I1213 13:05:29.124335  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.124468  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.124498  395903 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-802674"
	I1213 13:05:29.124514  395903 addons.go:70] Setting volcano=true in profile "addons-802674"
	I1213 13:05:29.124516  395903 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-802674"
	I1213 13:05:29.124526  395903 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-802674"
	I1213 13:05:29.124535  395903 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-802674"
	I1213 13:05:29.124539  395903 addons.go:70] Setting volumesnapshots=true in profile "addons-802674"
	I1213 13:05:29.124549  395903 addons.go:239] Setting addon volumesnapshots=true in "addons-802674"
	I1213 13:05:29.124555  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.124569  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.124849  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.124488  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.124988  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.125066  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.125828  395903 out.go:179] * Verifying Kubernetes components...
	I1213 13:05:29.123850  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.126471  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.124880  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.124499  395903 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-802674"
	I1213 13:05:29.127522  395903 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-802674"
	I1213 13:05:29.127553  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.128080  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.128577  395903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:05:29.124490  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.124529  395903 addons.go:239] Setting addon volcano=true in "addons-802674"
	I1213 13:05:29.128941  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.129412  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.124496  395903 addons.go:70] Setting gcp-auth=true in profile "addons-802674"
	I1213 13:05:29.130008  395903 mustload.go:66] Loading cluster: addons-802674
	I1213 13:05:29.124490  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.124502  395903 addons.go:70] Setting ingress=true in profile "addons-802674"
	I1213 13:05:29.130911  395903 addons.go:239] Setting addon ingress=true in "addons-802674"
	I1213 13:05:29.130953  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.124507  395903 addons.go:70] Setting ingress-dns=true in profile "addons-802674"
	I1213 13:05:29.134318  395903 addons.go:239] Setting addon ingress-dns=true in "addons-802674"
	I1213 13:05:29.134362  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.123846  395903 addons.go:70] Setting registry-creds=true in profile "addons-802674"
	I1213 13:05:29.134993  395903 addons.go:239] Setting addon registry-creds=true in "addons-802674"
	I1213 13:05:29.135024  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.124505  395903 addons.go:70] Setting registry=true in profile "addons-802674"
	I1213 13:05:29.135167  395903 addons.go:239] Setting addon registry=true in "addons-802674"
	I1213 13:05:29.135197  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.135501  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.136091  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.137407  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.138693  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.144462  395903 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:05:29.144762  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.172576  395903 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1213 13:05:29.173623  395903 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1213 13:05:29.173690  395903 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1213 13:05:29.173801  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	W1213 13:05:29.204423  395903 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1213 13:05:29.210311  395903 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1213 13:05:29.211457  395903 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 13:05:29.211478  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1213 13:05:29.211691  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.215244  395903 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1213 13:05:29.216270  395903 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1213 13:05:29.216305  395903 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1213 13:05:29.216768  395903 addons.go:239] Setting addon default-storageclass=true in "addons-802674"
	I1213 13:05:29.216931  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.216322  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1213 13:05:29.217290  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.217844  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.223755  395903 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1213 13:05:29.224143  395903 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1213 13:05:29.225242  395903 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 13:05:29.225260  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1213 13:05:29.225329  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.226990  395903 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1213 13:05:29.228045  395903 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1213 13:05:29.229174  395903 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1213 13:05:29.229241  395903 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1213 13:05:29.229252  395903 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1213 13:05:29.229315  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.230874  395903 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-802674"
	I1213 13:05:29.230924  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.230978  395903 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1213 13:05:29.231426  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:29.232657  395903 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1213 13:05:29.232703  395903 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 13:05:29.233729  395903 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:05:29.233808  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 13:05:29.234437  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.237630  395903 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1213 13:05:29.238957  395903 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1213 13:05:29.238987  395903 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1213 13:05:29.239040  395903 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1213 13:05:29.240143  395903 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 13:05:29.240162  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1213 13:05:29.240217  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.240322  395903 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1213 13:05:29.240329  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1213 13:05:29.240357  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.240684  395903 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1213 13:05:29.240729  395903 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1213 13:05:29.240790  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.246676  395903 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1213 13:05:29.247790  395903 out.go:179]   - Using image docker.io/registry:3.0.0
	I1213 13:05:29.248760  395903 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1213 13:05:29.248837  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1213 13:05:29.248912  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.253461  395903 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1213 13:05:29.254397  395903 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 13:05:29.254448  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1213 13:05:29.254539  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.257575  395903 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1213 13:05:29.258483  395903 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 13:05:29.258502  395903 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 13:05:29.258555  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.263247  395903 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 13:05:29.264724  395903 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 13:05:29.265900  395903 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1213 13:05:29.268180  395903 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 13:05:29.268202  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1213 13:05:29.268262  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.271322  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.277239  395903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 13:05:29.287070  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.293634  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:29.297993  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.299634  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.299982  395903 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 13:05:29.299998  395903 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 13:05:29.300071  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.302301  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.317844  395903 out.go:179]   - Using image docker.io/busybox:stable
	I1213 13:05:29.318975  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.319050  395903 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1213 13:05:29.319895  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.320146  395903 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 13:05:29.320169  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1213 13:05:29.320230  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:29.324007  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.325502  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.326606  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.327963  395903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:05:29.330987  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.331484  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.334451  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.361110  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:29.364577  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	W1213 13:05:29.364989  395903 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1213 13:05:29.365023  395903 retry.go:31] will retry after 256.098095ms: ssh: handshake failed: EOF
	W1213 13:05:29.365444  395903 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1213 13:05:29.365456  395903 retry.go:31] will retry after 341.621965ms: ssh: handshake failed: EOF
	I1213 13:05:29.469880  395903 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1213 13:05:29.469905  395903 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1213 13:05:29.477882  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 13:05:29.485210  395903 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1213 13:05:29.485237  395903 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1213 13:05:29.487425  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 13:05:29.497146  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:05:29.507221  395903 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1213 13:05:29.507255  395903 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1213 13:05:29.507410  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 13:05:29.508044  395903 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1213 13:05:29.508064  395903 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1213 13:05:29.509580  395903 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1213 13:05:29.509595  395903 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1213 13:05:29.512578  395903 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1213 13:05:29.512649  395903 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1213 13:05:29.539221  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 13:05:29.540683  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 13:05:29.547094  395903 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 13:05:29.547120  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1213 13:05:29.548241  395903 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1213 13:05:29.548281  395903 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1213 13:05:29.550930  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1213 13:05:29.551972  395903 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1213 13:05:29.551990  395903 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1213 13:05:29.562028  395903 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1213 13:05:29.562051  395903 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1213 13:05:29.564636  395903 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1213 13:05:29.564653  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1213 13:05:29.568408  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1213 13:05:29.601185  395903 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1213 13:05:29.601210  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1213 13:05:29.605884  395903 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1213 13:05:29.605906  395903 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1213 13:05:29.607158  395903 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1213 13:05:29.607175  395903 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1213 13:05:29.610236  395903 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 13:05:29.610296  395903 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 13:05:29.611990  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1213 13:05:29.651390  395903 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 13:05:29.651420  395903 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 13:05:29.653096  395903 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1213 13:05:29.653336  395903 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1213 13:05:29.655405  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1213 13:05:29.663041  395903 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 13:05:29.663067  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1213 13:05:29.675602  395903 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1213 13:05:29.678060  395903 node_ready.go:35] waiting up to 6m0s for node "addons-802674" to be "Ready" ...
	I1213 13:05:29.681198  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 13:05:29.707993  395903 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1213 13:05:29.708179  395903 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1213 13:05:29.726692  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 13:05:29.777839  395903 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1213 13:05:29.777977  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1213 13:05:29.845269  395903 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1213 13:05:29.845363  395903 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1213 13:05:29.847321  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 13:05:29.940448  395903 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1213 13:05:29.940469  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1213 13:05:29.963096  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 13:05:29.990226  395903 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1213 13:05:29.990322  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1213 13:05:30.026392  395903 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 13:05:30.026422  395903 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1213 13:05:30.083300  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 13:05:30.182602  395903 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-802674" context rescaled to 1 replicas
	I1213 13:05:30.688005  395903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.1472861s)
	I1213 13:05:30.688042  395903 addons.go:495] Verifying addon ingress=true in "addons-802674"
	I1213 13:05:30.688076  395903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.137111414s)
	I1213 13:05:30.688232  395903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.119797975s)
	I1213 13:05:30.688277  395903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.076265801s)
	I1213 13:05:30.688299  395903 addons.go:495] Verifying addon registry=true in "addons-802674"
	I1213 13:05:30.688409  395903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.00714422s)
	I1213 13:05:30.688433  395903 addons.go:495] Verifying addon metrics-server=true in "addons-802674"
	I1213 13:05:30.688346  395903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.032909298s)
	I1213 13:05:30.690246  395903 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-802674 service yakd-dashboard -n yakd-dashboard
	
	I1213 13:05:30.690248  395903 out.go:179] * Verifying registry addon...
	I1213 13:05:30.690255  395903 out.go:179] * Verifying ingress addon...
	I1213 13:05:30.692651  395903 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1213 13:05:30.692651  395903 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1213 13:05:30.695080  395903 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1213 13:05:30.695172  395903 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 13:05:30.695190  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:31.102393  395903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.375605615s)
	I1213 13:05:31.102448  395903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.255018653s)
	W1213 13:05:31.102454  395903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 13:05:31.102484  395903 retry.go:31] will retry after 372.531156ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 13:05:31.102522  395903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.139394962s)
	I1213 13:05:31.102830  395903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.019488566s)
	I1213 13:05:31.102863  395903 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-802674"
	I1213 13:05:31.104421  395903 out.go:179] * Verifying csi-hostpath-driver addon...
	I1213 13:05:31.108397  395903 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1213 13:05:31.111456  395903 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	W1213 13:05:31.111480  395903 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
	I1213 13:05:31.111480  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:31.195374  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:31.195533  395903 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1213 13:05:31.195551  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:31.475305  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 13:05:31.612419  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:31.682224  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:31.695992  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:31.696250  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:32.111177  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:32.195713  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:32.195937  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:32.612358  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:32.695128  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:32.695165  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:33.111381  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:33.196038  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:33.196038  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:33.611117  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:33.695884  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:33.696009  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:33.940181  395903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.464812412s)
	I1213 13:05:34.111362  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:34.182044  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:34.195202  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:34.195337  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:34.612312  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:34.695372  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:34.695523  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:35.122213  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:35.196224  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:35.196437  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:35.613537  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:35.695050  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:35.695373  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:36.111049  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:36.195494  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:36.195648  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:36.611747  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:36.682328  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:36.695367  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:36.695527  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:36.907705  395903 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1213 13:05:36.907769  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:36.925766  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:37.026412  395903 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1213 13:05:37.038570  395903 addons.go:239] Setting addon gcp-auth=true in "addons-802674"
	I1213 13:05:37.038630  395903 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:05:37.039025  395903 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:05:37.055946  395903 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1213 13:05:37.056006  395903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:05:37.072483  395903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:05:37.112021  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:37.164906  395903 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 13:05:37.166089  395903 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1213 13:05:37.167021  395903 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1213 13:05:37.167039  395903 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1213 13:05:37.179676  395903 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1213 13:05:37.179696  395903 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1213 13:05:37.192068  395903 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 13:05:37.192085  395903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1213 13:05:37.195079  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:37.195201  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:37.205029  395903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 13:05:37.490706  395903 addons.go:495] Verifying addon gcp-auth=true in "addons-802674"
	I1213 13:05:37.492096  395903 out.go:179] * Verifying gcp-auth addon...
	I1213 13:05:37.494573  395903 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1213 13:05:37.496697  395903 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1213 13:05:37.496715  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:37.611804  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:37.695711  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:37.695728  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:37.997549  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:38.111146  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:38.196056  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:38.196166  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:38.498069  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:38.612271  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:38.696024  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:38.696241  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:38.998211  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:39.111913  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:39.181513  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:39.195815  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:39.196033  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:39.498078  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:39.611876  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:39.695746  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:39.695939  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:39.997692  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:40.111317  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:40.195377  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:40.195516  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:40.498737  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:40.611498  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:40.695165  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:40.695619  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:40.998254  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:41.111877  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:41.195689  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:41.195972  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:41.497901  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:41.611393  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:41.681846  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:41.696249  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:41.696343  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:41.998056  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:42.111543  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:42.195965  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:42.196136  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:42.497803  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:42.611398  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:42.695375  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:42.695599  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:42.997346  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:43.112108  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:43.195955  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:43.196111  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:43.498325  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:43.611920  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:43.695827  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:43.695918  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:43.997547  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:44.111444  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:44.182082  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:44.195663  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:44.195906  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:44.497399  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:44.612080  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:44.695624  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:44.695719  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:44.997482  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:45.111107  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:45.195897  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:45.196081  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:45.498074  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:45.611810  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:45.695383  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:45.695530  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:45.998510  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:46.111308  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:46.195225  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:46.195295  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:46.498375  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:46.612280  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:46.681821  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:46.695220  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:46.695272  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:46.998192  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:47.111878  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:47.195674  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:47.195823  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:47.497798  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:47.611441  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:47.695592  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:47.695632  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:47.998482  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:48.111166  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:48.196260  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:48.196306  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:48.498109  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:48.611843  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:48.695658  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:48.695845  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:48.997583  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:49.111365  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:49.181856  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:49.195104  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:49.195221  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:49.498108  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:49.612089  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:49.695355  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:49.695397  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:49.997314  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:50.111900  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:50.195621  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:50.195886  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:50.497982  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:50.611892  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:50.695797  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:50.695975  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:50.997855  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:51.111578  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:51.182071  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:51.195355  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:51.195520  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:51.498350  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:51.611195  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:51.694960  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:51.695182  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:51.998352  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:52.112115  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:52.195315  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:52.195434  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:52.498454  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:52.611102  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:52.695672  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:52.695740  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:52.997399  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:53.110769  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:53.195289  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:53.195337  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:53.498414  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:53.612232  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:53.682043  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:53.695344  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:53.695573  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:53.997651  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:54.111381  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:54.194968  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:54.195201  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:54.497898  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:54.611661  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:54.695550  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:54.695826  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:54.997548  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:55.111128  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:55.195089  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:55.195288  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:55.498092  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:55.611930  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:55.695877  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:55.695966  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:55.997705  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:56.111106  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:56.181731  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:56.195889  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:56.196152  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:56.497515  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:56.611148  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:56.694973  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:56.695202  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:56.998052  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:57.111722  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:57.195416  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:57.195544  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:57.498272  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:57.611935  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:57.695896  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:57.696083  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:57.997466  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:58.111189  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:05:58.182112  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:05:58.195495  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:58.195690  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:58.497248  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:58.611965  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:58.695916  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:58.696003  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:58.998106  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:59.111897  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:59.195850  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:59.196044  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:59.498039  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:05:59.611855  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:05:59.695613  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:05:59.695818  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:05:59.997670  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:00.111410  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:00.195382  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:00.195614  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:00.497553  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:00.611074  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:06:00.681522  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:06:00.695716  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:00.695974  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:00.997278  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:01.111672  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:01.195331  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:01.195392  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:01.498345  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:01.611129  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:01.696175  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:01.696243  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:01.997995  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:02.111793  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:02.195494  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:02.195592  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:02.498206  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:02.612027  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:02.695652  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:02.695881  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:02.997603  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:03.111149  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:06:03.181465  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:06:03.195734  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:03.195852  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:03.497938  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:03.611720  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:03.695462  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:03.695506  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:03.998207  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:04.111876  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:04.195448  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:04.195698  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:04.497797  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:04.611584  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:04.695662  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:04.695762  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:04.997765  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:05.111418  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:06:05.181961  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:06:05.195300  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:05.195347  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:05.498230  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:05.611940  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:05.695616  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:05.697563  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:05.997325  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:06.111995  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:06.195717  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:06.195907  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:06.497620  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:06.611393  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:06.696082  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:06.696181  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:06.998220  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:07.111725  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:06:07.182198  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:06:07.195764  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:07.196008  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:07.498466  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:07.611096  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:07.696322  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:07.696591  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:07.997084  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:08.111824  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:08.195846  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:08.196147  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:08.497709  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:08.611299  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:08.695233  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:08.695452  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:08.997413  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:09.111977  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:09.195808  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:09.196022  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:09.498042  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:09.611614  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 13:06:09.682233  395903 node_ready.go:57] node "addons-802674" has "Ready":"False" status (will retry)
	I1213 13:06:09.695591  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:09.695856  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:09.997660  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:10.111324  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:10.181969  395903 node_ready.go:49] node "addons-802674" is "Ready"
	I1213 13:06:10.182014  395903 node_ready.go:38] duration metric: took 40.503331317s for node "addons-802674" to be "Ready" ...
	I1213 13:06:10.182036  395903 api_server.go:52] waiting for apiserver process to appear ...
	I1213 13:06:10.182108  395903 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:06:10.198326  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:10.198400  395903 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 13:06:10.198423  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:10.203127  395903 api_server.go:72] duration metric: took 41.079530905s to wait for apiserver process to appear ...
	I1213 13:06:10.203153  395903 api_server.go:88] waiting for apiserver healthz status ...
	I1213 13:06:10.203183  395903 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1213 13:06:10.209790  395903 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1213 13:06:10.210698  395903 api_server.go:141] control plane version: v1.34.2
	I1213 13:06:10.210725  395903 api_server.go:131] duration metric: took 7.563433ms to wait for apiserver health ...
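The lines above record the post-start health gate: once the kube-apiserver process is found, minikube polls the /healthz endpoint until it answers 200 "ok", then logs how long the wait took. A minimal Go sketch of that kind of probe is shown below; the helper name, timeout, polling interval, and TLS handling are illustrative assumptions, not minikube's actual api_server.go code.

	// waitForHealthz polls the given healthz URL until it returns 200 or the
	// timeout expires. Purely illustrative; the test cluster uses a self-signed
	// certificate, so verification is skipped here for the sketch only.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, string(body))
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // poll interval; the real cadence may differ
		}
		return fmt.Errorf("apiserver healthz did not return 200 within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}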
	I1213 13:06:10.210739  395903 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 13:06:10.219632  395903 system_pods.go:59] 20 kube-system pods found
	I1213 13:06:10.219658  395903 system_pods.go:61] "amd-gpu-device-plugin-jrjdp" [80dc3d87-78c3-4beb-8541-e2a6cf003f4e] Pending
	I1213 13:06:10.219676  395903 system_pods.go:61] "coredns-66bc5c9577-bqhwx" [6e6787a3-4665-472f-8a18-3c930bf5db5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:06:10.219680  395903 system_pods.go:61] "csi-hostpath-attacher-0" [129d11a0-a7f2-496e-8aff-8e11fcb6fb13] Pending
	I1213 13:06:10.219686  395903 system_pods.go:61] "csi-hostpath-resizer-0" [57b8fb38-5390-4b4b-9f7c-4d1a77340190] Pending
	I1213 13:06:10.219689  395903 system_pods.go:61] "csi-hostpathplugin-hzzp2" [36eb2635-7c1e-4325-a786-a3b95ca71a86] Pending
	I1213 13:06:10.219693  395903 system_pods.go:61] "etcd-addons-802674" [f7a0a060-f323-418a-9a4a-6c6eefe00b21] Running
	I1213 13:06:10.219696  395903 system_pods.go:61] "kindnet-fctx2" [5f957208-1d1f-4aeb-bca9-523b32917426] Running
	I1213 13:06:10.219700  395903 system_pods.go:61] "kube-apiserver-addons-802674" [06bad0dc-8b07-472f-8f2d-2971df4f51f1] Running
	I1213 13:06:10.219709  395903 system_pods.go:61] "kube-controller-manager-addons-802674" [fa7e91b8-ce72-44b7-8357-f51092368fe7] Running
	I1213 13:06:10.219714  395903 system_pods.go:61] "kube-ingress-dns-minikube" [0839f500-727a-4f58-89b3-befe4823e506] Pending
	I1213 13:06:10.219719  395903 system_pods.go:61] "kube-proxy-2ss46" [bf960d04-5c48-4e0d-816c-31c2092f80a0] Running
	I1213 13:06:10.219726  395903 system_pods.go:61] "kube-scheduler-addons-802674" [6be30b60-2b6f-4625-a35f-fa91972f8f6a] Running
	I1213 13:06:10.219733  395903 system_pods.go:61] "metrics-server-85b7d694d7-lmm9f" [0e2bcdb7-46b3-4d40-ab19-396aa47a4f0e] Pending
	I1213 13:06:10.219741  395903 system_pods.go:61] "nvidia-device-plugin-daemonset-bldsd" [58d6cc59-4315-40ba-b95c-caaeeea9ef12] Pending
	I1213 13:06:10.219748  395903 system_pods.go:61] "registry-6b586f9694-8nh6x" [026d87d5-39ae-4470-87b4-17ae3e729d61] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 13:06:10.219789  395903 system_pods.go:61] "registry-creds-764b6fb674-vppgx" [285d2b4d-1e18-410a-9ac4-2fe91c56bfd2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 13:06:10.219801  395903 system_pods.go:61] "registry-proxy-q4bmk" [558bd886-2608-4e2a-b513-906ab0a12e90] Pending
	I1213 13:06:10.219807  395903 system_pods.go:61] "snapshot-controller-7d9fbc56b8-mqxdv" [cb2184ed-d11b-48d0-9eed-22dc7d0ff650] Pending
	I1213 13:06:10.219812  395903 system_pods.go:61] "snapshot-controller-7d9fbc56b8-nzsxs" [c412ecb7-14e7-4e98-bdff-eaf1d5ef351f] Pending
	I1213 13:06:10.219817  395903 system_pods.go:61] "storage-provisioner" [72c4a30a-0415-4cab-92ce-6e20600ca8b1] Pending
	I1213 13:06:10.219826  395903 system_pods.go:74] duration metric: took 9.079384ms to wait for pod list to return data ...
	I1213 13:06:10.219846  395903 default_sa.go:34] waiting for default service account to be created ...
	I1213 13:06:10.221630  395903 default_sa.go:45] found service account: "default"
	I1213 13:06:10.221657  395903 default_sa.go:55] duration metric: took 1.804296ms for default service account to be created ...
	I1213 13:06:10.221668  395903 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 13:06:10.230912  395903 system_pods.go:86] 20 kube-system pods found
	I1213 13:06:10.230942  395903 system_pods.go:89] "amd-gpu-device-plugin-jrjdp" [80dc3d87-78c3-4beb-8541-e2a6cf003f4e] Pending
	I1213 13:06:10.230956  395903 system_pods.go:89] "coredns-66bc5c9577-bqhwx" [6e6787a3-4665-472f-8a18-3c930bf5db5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:06:10.230963  395903 system_pods.go:89] "csi-hostpath-attacher-0" [129d11a0-a7f2-496e-8aff-8e11fcb6fb13] Pending
	I1213 13:06:10.230970  395903 system_pods.go:89] "csi-hostpath-resizer-0" [57b8fb38-5390-4b4b-9f7c-4d1a77340190] Pending
	I1213 13:06:10.230975  395903 system_pods.go:89] "csi-hostpathplugin-hzzp2" [36eb2635-7c1e-4325-a786-a3b95ca71a86] Pending
	I1213 13:06:10.230985  395903 system_pods.go:89] "etcd-addons-802674" [f7a0a060-f323-418a-9a4a-6c6eefe00b21] Running
	I1213 13:06:10.230991  395903 system_pods.go:89] "kindnet-fctx2" [5f957208-1d1f-4aeb-bca9-523b32917426] Running
	I1213 13:06:10.231000  395903 system_pods.go:89] "kube-apiserver-addons-802674" [06bad0dc-8b07-472f-8f2d-2971df4f51f1] Running
	I1213 13:06:10.231005  395903 system_pods.go:89] "kube-controller-manager-addons-802674" [fa7e91b8-ce72-44b7-8357-f51092368fe7] Running
	I1213 13:06:10.231018  395903 system_pods.go:89] "kube-ingress-dns-minikube" [0839f500-727a-4f58-89b3-befe4823e506] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 13:06:10.231027  395903 system_pods.go:89] "kube-proxy-2ss46" [bf960d04-5c48-4e0d-816c-31c2092f80a0] Running
	I1213 13:06:10.231033  395903 system_pods.go:89] "kube-scheduler-addons-802674" [6be30b60-2b6f-4625-a35f-fa91972f8f6a] Running
	I1213 13:06:10.231041  395903 system_pods.go:89] "metrics-server-85b7d694d7-lmm9f" [0e2bcdb7-46b3-4d40-ab19-396aa47a4f0e] Pending
	I1213 13:06:10.231046  395903 system_pods.go:89] "nvidia-device-plugin-daemonset-bldsd" [58d6cc59-4315-40ba-b95c-caaeeea9ef12] Pending
	I1213 13:06:10.231054  395903 system_pods.go:89] "registry-6b586f9694-8nh6x" [026d87d5-39ae-4470-87b4-17ae3e729d61] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 13:06:10.231066  395903 system_pods.go:89] "registry-creds-764b6fb674-vppgx" [285d2b4d-1e18-410a-9ac4-2fe91c56bfd2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 13:06:10.231071  395903 system_pods.go:89] "registry-proxy-q4bmk" [558bd886-2608-4e2a-b513-906ab0a12e90] Pending
	I1213 13:06:10.231077  395903 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mqxdv" [cb2184ed-d11b-48d0-9eed-22dc7d0ff650] Pending
	I1213 13:06:10.231084  395903 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nzsxs" [c412ecb7-14e7-4e98-bdff-eaf1d5ef351f] Pending
	I1213 13:06:10.231088  395903 system_pods.go:89] "storage-provisioner" [72c4a30a-0415-4cab-92ce-6e20600ca8b1] Pending
	I1213 13:06:10.231108  395903 retry.go:31] will retry after 296.337411ms: missing components: kube-dns
	I1213 13:06:10.499281  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:10.603585  395903 system_pods.go:86] 20 kube-system pods found
	I1213 13:06:10.603629  395903 system_pods.go:89] "amd-gpu-device-plugin-jrjdp" [80dc3d87-78c3-4beb-8541-e2a6cf003f4e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 13:06:10.603643  395903 system_pods.go:89] "coredns-66bc5c9577-bqhwx" [6e6787a3-4665-472f-8a18-3c930bf5db5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:06:10.603654  395903 system_pods.go:89] "csi-hostpath-attacher-0" [129d11a0-a7f2-496e-8aff-8e11fcb6fb13] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 13:06:10.603668  395903 system_pods.go:89] "csi-hostpath-resizer-0" [57b8fb38-5390-4b4b-9f7c-4d1a77340190] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 13:06:10.603689  395903 system_pods.go:89] "csi-hostpathplugin-hzzp2" [36eb2635-7c1e-4325-a786-a3b95ca71a86] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 13:06:10.603701  395903 system_pods.go:89] "etcd-addons-802674" [f7a0a060-f323-418a-9a4a-6c6eefe00b21] Running
	I1213 13:06:10.603709  395903 system_pods.go:89] "kindnet-fctx2" [5f957208-1d1f-4aeb-bca9-523b32917426] Running
	I1213 13:06:10.603717  395903 system_pods.go:89] "kube-apiserver-addons-802674" [06bad0dc-8b07-472f-8f2d-2971df4f51f1] Running
	I1213 13:06:10.603726  395903 system_pods.go:89] "kube-controller-manager-addons-802674" [fa7e91b8-ce72-44b7-8357-f51092368fe7] Running
	I1213 13:06:10.603735  395903 system_pods.go:89] "kube-ingress-dns-minikube" [0839f500-727a-4f58-89b3-befe4823e506] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 13:06:10.603740  395903 system_pods.go:89] "kube-proxy-2ss46" [bf960d04-5c48-4e0d-816c-31c2092f80a0] Running
	I1213 13:06:10.603746  395903 system_pods.go:89] "kube-scheduler-addons-802674" [6be30b60-2b6f-4625-a35f-fa91972f8f6a] Running
	I1213 13:06:10.603753  395903 system_pods.go:89] "metrics-server-85b7d694d7-lmm9f" [0e2bcdb7-46b3-4d40-ab19-396aa47a4f0e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 13:06:10.603762  395903 system_pods.go:89] "nvidia-device-plugin-daemonset-bldsd" [58d6cc59-4315-40ba-b95c-caaeeea9ef12] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 13:06:10.603770  395903 system_pods.go:89] "registry-6b586f9694-8nh6x" [026d87d5-39ae-4470-87b4-17ae3e729d61] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 13:06:10.603798  395903 system_pods.go:89] "registry-creds-764b6fb674-vppgx" [285d2b4d-1e18-410a-9ac4-2fe91c56bfd2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 13:06:10.603805  395903 system_pods.go:89] "registry-proxy-q4bmk" [558bd886-2608-4e2a-b513-906ab0a12e90] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 13:06:10.603820  395903 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mqxdv" [cb2184ed-d11b-48d0-9eed-22dc7d0ff650] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:10.603834  395903 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nzsxs" [c412ecb7-14e7-4e98-bdff-eaf1d5ef351f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:10.603841  395903 system_pods.go:89] "storage-provisioner" [72c4a30a-0415-4cab-92ce-6e20600ca8b1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:06:10.603869  395903 retry.go:31] will retry after 238.442167ms: missing components: kube-dns
	I1213 13:06:10.699072  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:10.699164  395903 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 13:06:10.699181  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:10.699182  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:10.847374  395903 system_pods.go:86] 20 kube-system pods found
	I1213 13:06:10.847408  395903 system_pods.go:89] "amd-gpu-device-plugin-jrjdp" [80dc3d87-78c3-4beb-8541-e2a6cf003f4e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 13:06:10.847417  395903 system_pods.go:89] "coredns-66bc5c9577-bqhwx" [6e6787a3-4665-472f-8a18-3c930bf5db5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:06:10.847424  395903 system_pods.go:89] "csi-hostpath-attacher-0" [129d11a0-a7f2-496e-8aff-8e11fcb6fb13] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 13:06:10.847430  395903 system_pods.go:89] "csi-hostpath-resizer-0" [57b8fb38-5390-4b4b-9f7c-4d1a77340190] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 13:06:10.847436  395903 system_pods.go:89] "csi-hostpathplugin-hzzp2" [36eb2635-7c1e-4325-a786-a3b95ca71a86] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 13:06:10.847440  395903 system_pods.go:89] "etcd-addons-802674" [f7a0a060-f323-418a-9a4a-6c6eefe00b21] Running
	I1213 13:06:10.847446  395903 system_pods.go:89] "kindnet-fctx2" [5f957208-1d1f-4aeb-bca9-523b32917426] Running
	I1213 13:06:10.847451  395903 system_pods.go:89] "kube-apiserver-addons-802674" [06bad0dc-8b07-472f-8f2d-2971df4f51f1] Running
	I1213 13:06:10.847456  395903 system_pods.go:89] "kube-controller-manager-addons-802674" [fa7e91b8-ce72-44b7-8357-f51092368fe7] Running
	I1213 13:06:10.847461  395903 system_pods.go:89] "kube-ingress-dns-minikube" [0839f500-727a-4f58-89b3-befe4823e506] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 13:06:10.847467  395903 system_pods.go:89] "kube-proxy-2ss46" [bf960d04-5c48-4e0d-816c-31c2092f80a0] Running
	I1213 13:06:10.847471  395903 system_pods.go:89] "kube-scheduler-addons-802674" [6be30b60-2b6f-4625-a35f-fa91972f8f6a] Running
	I1213 13:06:10.847475  395903 system_pods.go:89] "metrics-server-85b7d694d7-lmm9f" [0e2bcdb7-46b3-4d40-ab19-396aa47a4f0e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 13:06:10.847480  395903 system_pods.go:89] "nvidia-device-plugin-daemonset-bldsd" [58d6cc59-4315-40ba-b95c-caaeeea9ef12] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 13:06:10.847488  395903 system_pods.go:89] "registry-6b586f9694-8nh6x" [026d87d5-39ae-4470-87b4-17ae3e729d61] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 13:06:10.847493  395903 system_pods.go:89] "registry-creds-764b6fb674-vppgx" [285d2b4d-1e18-410a-9ac4-2fe91c56bfd2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 13:06:10.847500  395903 system_pods.go:89] "registry-proxy-q4bmk" [558bd886-2608-4e2a-b513-906ab0a12e90] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 13:06:10.847512  395903 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mqxdv" [cb2184ed-d11b-48d0-9eed-22dc7d0ff650] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:10.847518  395903 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nzsxs" [c412ecb7-14e7-4e98-bdff-eaf1d5ef351f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:10.847524  395903 system_pods.go:89] "storage-provisioner" [72c4a30a-0415-4cab-92ce-6e20600ca8b1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:06:10.847540  395903 retry.go:31] will retry after 354.737324ms: missing components: kube-dns
	I1213 13:06:10.997888  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:11.112637  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:11.197150  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:11.197179  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:11.207243  395903 system_pods.go:86] 20 kube-system pods found
	I1213 13:06:11.207278  395903 system_pods.go:89] "amd-gpu-device-plugin-jrjdp" [80dc3d87-78c3-4beb-8541-e2a6cf003f4e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 13:06:11.207287  395903 system_pods.go:89] "coredns-66bc5c9577-bqhwx" [6e6787a3-4665-472f-8a18-3c930bf5db5d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:06:11.207298  395903 system_pods.go:89] "csi-hostpath-attacher-0" [129d11a0-a7f2-496e-8aff-8e11fcb6fb13] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 13:06:11.207306  395903 system_pods.go:89] "csi-hostpath-resizer-0" [57b8fb38-5390-4b4b-9f7c-4d1a77340190] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 13:06:11.207314  395903 system_pods.go:89] "csi-hostpathplugin-hzzp2" [36eb2635-7c1e-4325-a786-a3b95ca71a86] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 13:06:11.207320  395903 system_pods.go:89] "etcd-addons-802674" [f7a0a060-f323-418a-9a4a-6c6eefe00b21] Running
	I1213 13:06:11.207327  395903 system_pods.go:89] "kindnet-fctx2" [5f957208-1d1f-4aeb-bca9-523b32917426] Running
	I1213 13:06:11.207340  395903 system_pods.go:89] "kube-apiserver-addons-802674" [06bad0dc-8b07-472f-8f2d-2971df4f51f1] Running
	I1213 13:06:11.207346  395903 system_pods.go:89] "kube-controller-manager-addons-802674" [fa7e91b8-ce72-44b7-8357-f51092368fe7] Running
	I1213 13:06:11.207357  395903 system_pods.go:89] "kube-ingress-dns-minikube" [0839f500-727a-4f58-89b3-befe4823e506] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 13:06:11.207366  395903 system_pods.go:89] "kube-proxy-2ss46" [bf960d04-5c48-4e0d-816c-31c2092f80a0] Running
	I1213 13:06:11.207372  395903 system_pods.go:89] "kube-scheduler-addons-802674" [6be30b60-2b6f-4625-a35f-fa91972f8f6a] Running
	I1213 13:06:11.207383  395903 system_pods.go:89] "metrics-server-85b7d694d7-lmm9f" [0e2bcdb7-46b3-4d40-ab19-396aa47a4f0e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 13:06:11.207391  395903 system_pods.go:89] "nvidia-device-plugin-daemonset-bldsd" [58d6cc59-4315-40ba-b95c-caaeeea9ef12] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 13:06:11.207400  395903 system_pods.go:89] "registry-6b586f9694-8nh6x" [026d87d5-39ae-4470-87b4-17ae3e729d61] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 13:06:11.207414  395903 system_pods.go:89] "registry-creds-764b6fb674-vppgx" [285d2b4d-1e18-410a-9ac4-2fe91c56bfd2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 13:06:11.207422  395903 system_pods.go:89] "registry-proxy-q4bmk" [558bd886-2608-4e2a-b513-906ab0a12e90] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 13:06:11.207430  395903 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mqxdv" [cb2184ed-d11b-48d0-9eed-22dc7d0ff650] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:11.207437  395903 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nzsxs" [c412ecb7-14e7-4e98-bdff-eaf1d5ef351f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:11.207444  395903 system_pods.go:89] "storage-provisioner" [72c4a30a-0415-4cab-92ce-6e20600ca8b1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:06:11.207467  395903 retry.go:31] will retry after 510.78588ms: missing components: kube-dns
	I1213 13:06:11.497836  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:11.612215  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:11.696603  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:11.697183  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:11.723320  395903 system_pods.go:86] 20 kube-system pods found
	I1213 13:06:11.723361  395903 system_pods.go:89] "amd-gpu-device-plugin-jrjdp" [80dc3d87-78c3-4beb-8541-e2a6cf003f4e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 13:06:11.723371  395903 system_pods.go:89] "coredns-66bc5c9577-bqhwx" [6e6787a3-4665-472f-8a18-3c930bf5db5d] Running
	I1213 13:06:11.723381  395903 system_pods.go:89] "csi-hostpath-attacher-0" [129d11a0-a7f2-496e-8aff-8e11fcb6fb13] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 13:06:11.723389  395903 system_pods.go:89] "csi-hostpath-resizer-0" [57b8fb38-5390-4b4b-9f7c-4d1a77340190] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 13:06:11.723398  395903 system_pods.go:89] "csi-hostpathplugin-hzzp2" [36eb2635-7c1e-4325-a786-a3b95ca71a86] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 13:06:11.723405  395903 system_pods.go:89] "etcd-addons-802674" [f7a0a060-f323-418a-9a4a-6c6eefe00b21] Running
	I1213 13:06:11.723412  395903 system_pods.go:89] "kindnet-fctx2" [5f957208-1d1f-4aeb-bca9-523b32917426] Running
	I1213 13:06:11.723429  395903 system_pods.go:89] "kube-apiserver-addons-802674" [06bad0dc-8b07-472f-8f2d-2971df4f51f1] Running
	I1213 13:06:11.723436  395903 system_pods.go:89] "kube-controller-manager-addons-802674" [fa7e91b8-ce72-44b7-8357-f51092368fe7] Running
	I1213 13:06:11.723586  395903 system_pods.go:89] "kube-ingress-dns-minikube" [0839f500-727a-4f58-89b3-befe4823e506] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 13:06:11.723600  395903 system_pods.go:89] "kube-proxy-2ss46" [bf960d04-5c48-4e0d-816c-31c2092f80a0] Running
	I1213 13:06:11.723607  395903 system_pods.go:89] "kube-scheduler-addons-802674" [6be30b60-2b6f-4625-a35f-fa91972f8f6a] Running
	I1213 13:06:11.723623  395903 system_pods.go:89] "metrics-server-85b7d694d7-lmm9f" [0e2bcdb7-46b3-4d40-ab19-396aa47a4f0e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 13:06:11.723635  395903 system_pods.go:89] "nvidia-device-plugin-daemonset-bldsd" [58d6cc59-4315-40ba-b95c-caaeeea9ef12] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 13:06:11.723646  395903 system_pods.go:89] "registry-6b586f9694-8nh6x" [026d87d5-39ae-4470-87b4-17ae3e729d61] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 13:06:11.723656  395903 system_pods.go:89] "registry-creds-764b6fb674-vppgx" [285d2b4d-1e18-410a-9ac4-2fe91c56bfd2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 13:06:11.723667  395903 system_pods.go:89] "registry-proxy-q4bmk" [558bd886-2608-4e2a-b513-906ab0a12e90] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 13:06:11.723676  395903 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mqxdv" [cb2184ed-d11b-48d0-9eed-22dc7d0ff650] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:11.723685  395903 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nzsxs" [c412ecb7-14e7-4e98-bdff-eaf1d5ef351f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 13:06:11.723694  395903 system_pods.go:89] "storage-provisioner" [72c4a30a-0415-4cab-92ce-6e20600ca8b1] Running
	I1213 13:06:11.723706  395903 system_pods.go:126] duration metric: took 1.50203012s to wait for k8s-apps to be running ...
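The retry.go:31 lines above repeat the same pattern on each pass: list the kube-system pods, note which required components still lack a Running pod (here kube-dns), sleep a short jittered delay, and try again until everything is present or the overall deadline passes. A dependency-free sketch of that loop follows; the type and function names are hypothetical and the pod listing is abstracted behind a callback, so this is a pattern illustration rather than minikube's system_pods.go implementation.

	// Package wait sketches a "wait for required components" retry loop.
	package wait

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// PodStatus is a stand-in for the fields the check actually needs.
	type PodStatus struct {
		Name    string
		Labels  map[string]string
		Running bool
	}

	// WaitForComponents polls listPods until every required label (e.g.
	// "k8s-app=kube-dns") is backed by a Running pod, sleeping a short
	// jittered delay between attempts.
	func WaitForComponents(listPods func() ([]PodStatus, error), required []string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			pods, err := listPods()
			if err != nil {
				return err
			}
			missing := missingComponents(pods, required)
			if len(missing) == 0 {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out; missing components: %v", missing)
			}
			delay := time.Duration(200+rand.Intn(400)) * time.Millisecond // jittered backoff, illustrative only
			fmt.Printf("will retry after %s: missing components: %v\n", delay, missing)
			time.Sleep(delay)
		}
	}

	func missingComponents(pods []PodStatus, required []string) []string {
		var missing []string
		for _, want := range required {
			found := false
			for _, p := range pods {
				if p.Running && hasLabel(p.Labels, want) {
					found = true
					break
				}
			}
			if !found {
				missing = append(missing, want)
			}
		}
		return missing
	}

	func hasLabel(labels map[string]string, kv string) bool {
		for k, v := range labels {
			if k+"="+v == kv {
				return true
			}
		}
		return false
	}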
	I1213 13:06:11.723720  395903 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 13:06:11.723823  395903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:06:11.739971  395903 system_svc.go:56] duration metric: took 16.243997ms WaitForService to wait for kubelet
	I1213 13:06:11.740002  395903 kubeadm.go:587] duration metric: took 42.616407317s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:06:11.740024  395903 node_conditions.go:102] verifying NodePressure condition ...
	I1213 13:06:11.742932  395903 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 13:06:11.742961  395903 node_conditions.go:123] node cpu capacity is 8
	I1213 13:06:11.742979  395903 node_conditions.go:105] duration metric: took 2.948877ms to run NodePressure ...
	I1213 13:06:11.742997  395903 start.go:242] waiting for startup goroutines ...
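Before moving on to the remaining addon waits, the log also records a kubelet service check (running "sudo systemctl is-active --quiet service kubelet" over the ssh runner) and a node capacity/NodePressure read. A small illustrative sketch of the service check alone, run locally rather than over SSH, might look like the following; the function name and local execution are assumptions for the sketch, not the test harness's code.

	// kubeletActive reports whether the kubelet systemd unit is active.
	// With --quiet, systemctl prints nothing and the exit code carries the answer.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func kubeletActive() bool {
		err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
		return err == nil
	}

	func main() {
		start := time.Now()
		if kubeletActive() {
			fmt.Printf("kubelet is active (checked in %s)\n", time.Since(start))
		} else {
			fmt.Println("kubelet is not active")
		}
	}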
	I1213 13:06:12.002765  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:12.112072  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:12.196010  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:12.196175  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:12.498958  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:12.612187  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:12.696152  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:12.696203  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:12.998528  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:13.111646  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:13.196142  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:13.196180  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:13.499037  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:13.615112  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:13.697442  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:13.697723  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:13.999214  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:14.113146  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:14.196518  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:14.197025  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:14.498328  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:14.612630  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:14.713534  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:14.713687  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:14.998351  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:15.112002  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:15.196057  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:15.196119  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:15.498248  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:15.613053  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:15.696124  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:15.696299  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:15.998988  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:16.112447  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:16.196651  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:16.196763  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:16.498190  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:16.612750  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:16.774262  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:16.774280  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:16.998789  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:17.112279  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:17.195922  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:17.195922  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:17.498287  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:17.612698  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:17.696596  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:17.696668  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:17.998468  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:18.112845  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:18.197074  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:18.197195  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:18.498815  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:18.612310  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:18.696670  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:18.696709  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:18.999106  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:19.114615  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:19.196538  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:19.196612  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:19.498092  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:19.611880  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:19.695624  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:19.695880  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:19.997579  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:20.111994  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:20.196520  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:20.196829  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:20.498383  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:20.612298  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:20.696097  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:20.696218  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:21.033230  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:21.112226  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:21.196559  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:21.196640  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:21.497448  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:21.613025  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:21.696099  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:21.696499  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:21.999695  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:22.111974  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:22.212221  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:22.212264  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:22.499024  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:22.612572  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:22.696605  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:22.696888  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:22.998338  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:23.113037  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:23.196040  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:23.196113  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:23.498611  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:23.611493  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:23.696226  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:23.696374  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:23.998764  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:24.112210  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:24.196478  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:24.196521  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:24.497821  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:24.612513  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:24.696795  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:24.696823  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:24.998836  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:25.112021  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:25.196282  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:25.196415  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:25.498244  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:25.612163  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:25.695738  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:25.696037  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:25.997990  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:26.112005  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:26.195752  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:26.195821  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:26.497983  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:26.612554  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:26.706583  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:26.706638  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:26.998263  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:27.114273  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:27.196357  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:27.196395  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:27.498599  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:27.611770  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:27.695450  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:27.695492  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:27.998756  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:28.111630  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:28.196430  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:28.196641  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:28.497910  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:28.612336  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:28.696420  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:28.696523  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:28.999406  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:29.113159  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:29.196473  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:29.196524  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:29.498006  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:29.612644  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:29.696859  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:29.696925  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:29.997869  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:30.111933  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:30.195974  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:30.196120  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:30.498380  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:30.613034  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:30.713492  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:30.713520  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:31.004098  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:31.112701  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:31.196726  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:31.196759  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:31.498073  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:31.612386  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:31.713276  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:31.713328  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:31.998810  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:32.111790  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:32.196470  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:32.196602  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:32.499133  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:32.612232  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:32.696287  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:32.696412  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:32.999334  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:33.112657  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:33.196742  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:33.196876  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:33.497682  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:33.611626  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:33.696463  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:33.696474  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:33.999629  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:34.111900  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:34.197060  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:34.197261  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:34.498973  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:34.612192  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:34.696048  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:34.696141  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:34.998730  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:35.114184  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:35.198357  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:35.199693  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:35.499849  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:35.612880  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:35.697947  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:35.698138  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 13:06:35.998901  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:36.114017  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:36.196726  395903 kapi.go:107] duration metric: took 1m5.504070223s to wait for kubernetes.io/minikube-addons=registry ...
	I1213 13:06:36.196823  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:36.497747  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:36.611847  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:36.698150  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:36.998255  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:37.112795  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:37.196905  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:37.517019  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:37.611906  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:37.695732  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:37.997836  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:38.112505  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:38.196871  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:38.497632  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:38.611623  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:38.696716  395903 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 13:06:38.998156  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:39.112185  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:39.196587  395903 kapi.go:107] duration metric: took 1m8.503933393s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1213 13:06:39.498259  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:39.612162  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:40.002836  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:40.114414  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:40.498934  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:40.612298  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:40.999043  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:41.112500  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:41.497677  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:41.611936  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:41.999847  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:42.112856  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:42.497628  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:42.612017  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:42.998450  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:43.112528  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:43.498101  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:43.748308  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:43.998387  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:44.112558  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:44.498332  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 13:06:44.612759  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:44.998218  395903 kapi.go:107] duration metric: took 1m7.503669997s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1213 13:06:44.999896  395903 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-802674 cluster.
	I1213 13:06:45.001269  395903 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1213 13:06:45.002589  395903 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
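As a reference for the message above, a minimal pod spec that opts out of the gcp-auth credential mount might look like the following sketch. It assumes the addon only needs the `gcp-auth-skip-secret` label key named in the log; the pod name, image, and the "true" value are illustrative assumptions, not taken from this run.

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds            # hypothetical name for illustration
      labels:
        gcp-auth-skip-secret: "true"   # assumed value; the addon message only names the key
    spec:
      containers:
      - name: app
        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
        command: ["sleep", "3600"]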
	I1213 13:06:45.113208  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:45.611942  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:46.113218  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:46.611724  395903 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 13:06:47.112168  395903 kapi.go:107] duration metric: took 1m16.003768664s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1213 13:06:47.113752  395903 out.go:179] * Enabled addons: nvidia-device-plugin, ingress-dns, registry-creds, amd-gpu-device-plugin, storage-provisioner, cloud-spanner, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1213 13:06:47.115166  395903 addons.go:530] duration metric: took 1m17.991519282s for enable addons: enabled=[nvidia-device-plugin ingress-dns registry-creds amd-gpu-device-plugin storage-provisioner cloud-spanner inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1213 13:06:47.115212  395903 start.go:247] waiting for cluster config update ...
	I1213 13:06:47.115232  395903 start.go:256] writing updated cluster config ...
	I1213 13:06:47.115485  395903 ssh_runner.go:195] Run: rm -f paused
	I1213 13:06:47.119601  395903 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:06:47.122462  395903 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bqhwx" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:47.126247  395903 pod_ready.go:94] pod "coredns-66bc5c9577-bqhwx" is "Ready"
	I1213 13:06:47.126266  395903 pod_ready.go:86] duration metric: took 3.783169ms for pod "coredns-66bc5c9577-bqhwx" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:47.128163  395903 pod_ready.go:83] waiting for pod "etcd-addons-802674" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:47.131322  395903 pod_ready.go:94] pod "etcd-addons-802674" is "Ready"
	I1213 13:06:47.131343  395903 pod_ready.go:86] duration metric: took 3.161341ms for pod "etcd-addons-802674" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:47.132975  395903 pod_ready.go:83] waiting for pod "kube-apiserver-addons-802674" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:47.136230  395903 pod_ready.go:94] pod "kube-apiserver-addons-802674" is "Ready"
	I1213 13:06:47.136249  395903 pod_ready.go:86] duration metric: took 3.254569ms for pod "kube-apiserver-addons-802674" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:47.137934  395903 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-802674" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:47.523552  395903 pod_ready.go:94] pod "kube-controller-manager-addons-802674" is "Ready"
	I1213 13:06:47.523587  395903 pod_ready.go:86] duration metric: took 385.634772ms for pod "kube-controller-manager-addons-802674" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:47.723844  395903 pod_ready.go:83] waiting for pod "kube-proxy-2ss46" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:48.123600  395903 pod_ready.go:94] pod "kube-proxy-2ss46" is "Ready"
	I1213 13:06:48.123630  395903 pod_ready.go:86] duration metric: took 399.760698ms for pod "kube-proxy-2ss46" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:48.323829  395903 pod_ready.go:83] waiting for pod "kube-scheduler-addons-802674" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:48.723833  395903 pod_ready.go:94] pod "kube-scheduler-addons-802674" is "Ready"
	I1213 13:06:48.723871  395903 pod_ready.go:86] duration metric: took 400.014637ms for pod "kube-scheduler-addons-802674" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:06:48.723889  395903 pod_ready.go:40] duration metric: took 1.604253671s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:06:48.769108  395903 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 13:06:48.771066  395903 out.go:179] * Done! kubectl is now configured to use "addons-802674" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 13 13:06:46 addons-802674 crio[777]: time="2025-12-13T13:06:46.386499566Z" level=info msg="Starting container: efa46cf269b564b4844602a1d159fe37ab66ca5f6b418f189ee827b9dba093c8" id=637c47e6-a577-4106-8437-ef5115fe7694 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:06:46 addons-802674 crio[777]: time="2025-12-13T13:06:46.388904009Z" level=info msg="Started container" PID=6045 containerID=efa46cf269b564b4844602a1d159fe37ab66ca5f6b418f189ee827b9dba093c8 description=kube-system/csi-hostpathplugin-hzzp2/csi-snapshotter id=637c47e6-a577-4106-8437-ef5115fe7694 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed596f38d518b0501313c194bf3695d9802826720ca79cde7184616be41b907d
	Dec 13 13:06:51 addons-802674 crio[777]: time="2025-12-13T13:06:51.829483483Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ec296834-17a7-4a4c-abb5-77d2fc0d7961 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:06:51 addons-802674 crio[777]: time="2025-12-13T13:06:51.829579904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:06:51 addons-802674 crio[777]: time="2025-12-13T13:06:51.836612924Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3b163c12e079df419ef213126fc0ae341b9c99b3f243895a4567648a726c5960 UID:29062754-a680-492d-be0c-824bf09da2ed NetNS:/var/run/netns/0b0fc999-b7e3-4e64-aa57-4491b02ed80b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008aaf0}] Aliases:map[]}"
	Dec 13 13:06:51 addons-802674 crio[777]: time="2025-12-13T13:06:51.836642186Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 13 13:06:51 addons-802674 crio[777]: time="2025-12-13T13:06:51.846597352Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3b163c12e079df419ef213126fc0ae341b9c99b3f243895a4567648a726c5960 UID:29062754-a680-492d-be0c-824bf09da2ed NetNS:/var/run/netns/0b0fc999-b7e3-4e64-aa57-4491b02ed80b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008aaf0}] Aliases:map[]}"
	Dec 13 13:06:51 addons-802674 crio[777]: time="2025-12-13T13:06:51.846710682Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 13 13:06:51 addons-802674 crio[777]: time="2025-12-13T13:06:51.847459536Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 13:06:51 addons-802674 crio[777]: time="2025-12-13T13:06:51.848268741Z" level=info msg="Ran pod sandbox 3b163c12e079df419ef213126fc0ae341b9c99b3f243895a4567648a726c5960 with infra container: default/busybox/POD" id=ec296834-17a7-4a4c-abb5-77d2fc0d7961 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:06:51 addons-802674 crio[777]: time="2025-12-13T13:06:51.849312494Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=307e265f-3ba0-4782-b228-53ba34053fa7 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:06:51 addons-802674 crio[777]: time="2025-12-13T13:06:51.849431837Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=307e265f-3ba0-4782-b228-53ba34053fa7 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:06:51 addons-802674 crio[777]: time="2025-12-13T13:06:51.849467545Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=307e265f-3ba0-4782-b228-53ba34053fa7 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:06:51 addons-802674 crio[777]: time="2025-12-13T13:06:51.850084292Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=459737cf-12f0-468c-a1ca-98e86df0298d name=/runtime.v1.ImageService/PullImage
	Dec 13 13:06:51 addons-802674 crio[777]: time="2025-12-13T13:06:51.851452697Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 13 13:06:52 addons-802674 crio[777]: time="2025-12-13T13:06:52.44031733Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=459737cf-12f0-468c-a1ca-98e86df0298d name=/runtime.v1.ImageService/PullImage
	Dec 13 13:06:52 addons-802674 crio[777]: time="2025-12-13T13:06:52.440937012Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=55316e06-bde5-42cb-8d59-d605efae2ae1 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:06:52 addons-802674 crio[777]: time="2025-12-13T13:06:52.442280262Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a880479f-f029-4021-82a4-1351ef875828 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:06:52 addons-802674 crio[777]: time="2025-12-13T13:06:52.445981816Z" level=info msg="Creating container: default/busybox/busybox" id=24140edd-b5da-49be-8712-5f9110381f1b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:06:52 addons-802674 crio[777]: time="2025-12-13T13:06:52.446124585Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:06:52 addons-802674 crio[777]: time="2025-12-13T13:06:52.451680156Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:06:52 addons-802674 crio[777]: time="2025-12-13T13:06:52.452167693Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:06:52 addons-802674 crio[777]: time="2025-12-13T13:06:52.483922974Z" level=info msg="Created container 76bc6f0c50f5e8bc6dfc85af787cdb1e478741b792e5f86fd325cf0201d9973d: default/busybox/busybox" id=24140edd-b5da-49be-8712-5f9110381f1b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:06:52 addons-802674 crio[777]: time="2025-12-13T13:06:52.484571303Z" level=info msg="Starting container: 76bc6f0c50f5e8bc6dfc85af787cdb1e478741b792e5f86fd325cf0201d9973d" id=2da6fc67-5080-4469-ae10-3538b02944f0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:06:52 addons-802674 crio[777]: time="2025-12-13T13:06:52.48684183Z" level=info msg="Started container" PID=6165 containerID=76bc6f0c50f5e8bc6dfc85af787cdb1e478741b792e5f86fd325cf0201d9973d description=default/busybox/busybox id=2da6fc67-5080-4469-ae10-3538b02944f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b163c12e079df419ef213126fc0ae341b9c99b3f243895a4567648a726c5960
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	76bc6f0c50f5e       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          7 seconds ago        Running             busybox                                  0                   3b163c12e079d       busybox                                     default
	efa46cf269b56       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          14 seconds ago       Running             csi-snapshotter                          0                   ed596f38d518b       csi-hostpathplugin-hzzp2                    kube-system
	ae27723662583       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          15 seconds ago       Running             csi-provisioner                          0                   ed596f38d518b       csi-hostpathplugin-hzzp2                    kube-system
	25c2ccc8d56eb       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            15 seconds ago       Running             liveness-probe                           0                   ed596f38d518b       csi-hostpathplugin-hzzp2                    kube-system
	500e4d8d6d926       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 16 seconds ago       Running             gcp-auth                                 0                   f5e99866f8e93       gcp-auth-78565c9fb4-x58fn                   gcp-auth
	00b38c263e000       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           18 seconds ago       Running             hostpath                                 0                   ed596f38d518b       csi-hostpathplugin-hzzp2                    kube-system
	75d9ddc062ec2       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            18 seconds ago       Running             gadget                                   0                   1b12f8c73b1a6       gadget-2rht9                                gadget
	6df323a2878de       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                21 seconds ago       Running             node-driver-registrar                    0                   ed596f38d518b       csi-hostpathplugin-hzzp2                    kube-system
	909795cf069ef       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             21 seconds ago       Running             controller                               0                   31c05640213ab       ingress-nginx-controller-85d4c799dd-pqrjc   ingress-nginx
	c5db025aa30e9       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              25 seconds ago       Running             registry-proxy                           0                   3b77c60f1e6f2       registry-proxy-q4bmk                        kube-system
	263b6770119de       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   26 seconds ago       Running             csi-external-health-monitor-controller   0                   ed596f38d518b       csi-hostpathplugin-hzzp2                    kube-system
	f08ae0fc41016       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     27 seconds ago       Running             amd-gpu-device-plugin                    0                   b524b8787f620       amd-gpu-device-plugin-jrjdp                 kube-system
	6ab7dd35b300d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   28 seconds ago       Exited              patch                                    0                   73989b7a8c617       gcp-auth-certs-patch-hbxtn                  gcp-auth
	6d85d43816c0e       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             28 seconds ago       Running             csi-attacher                             0                   d1c120534dc2e       csi-hostpath-attacher-0                     kube-system
	40aee451d49aa       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      29 seconds ago       Running             volume-snapshot-controller               0                   f9f4611822449       snapshot-controller-7d9fbc56b8-mqxdv        kube-system
	7f147ccf5e501       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     29 seconds ago       Running             nvidia-device-plugin-ctr                 0                   2aea74ac27707       nvidia-device-plugin-daemonset-bldsd        kube-system
	a9051d728dbfa       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              32 seconds ago       Running             csi-resizer                              0                   108b51d55bc13       csi-hostpath-resizer-0                      kube-system
	372b2eedcace7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   33 seconds ago       Exited              patch                                    0                   ef0d7f8d0e3fb       ingress-nginx-admission-patch-kh6b6         ingress-nginx
	b0217f9dbb79d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   33 seconds ago       Exited              create                                   0                   02d377aacf5a0       gcp-auth-certs-create-df68w                 gcp-auth
	fc7d97af030f5       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        34 seconds ago       Running             metrics-server                           0                   158c513c1741c       metrics-server-85b7d694d7-lmm9f             kube-system
	f4ac5ed0bb71a       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      36 seconds ago       Running             volume-snapshot-controller               0                   07815e046b576       snapshot-controller-7d9fbc56b8-nzsxs        kube-system
	04be5533939ef       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              36 seconds ago       Running             yakd                                     0                   ff5d556899332       yakd-dashboard-5ff678cb9-l5tbt              yakd-dashboard
	5851ed168deef       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             39 seconds ago       Running             local-path-provisioner                   0                   b8ec7fbdea730       local-path-provisioner-648f6765c9-5vk9k     local-path-storage
	a02eaaa05e8f3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   41 seconds ago       Exited              create                                   0                   ebee5b47a3296       ingress-nginx-admission-create-4vxk5        ingress-nginx
	bb2165f7660fc       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           41 seconds ago       Running             registry                                 0                   3165a87ead941       registry-6b586f9694-8nh6x                   kube-system
	be21f9e65e565       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               43 seconds ago       Running             minikube-ingress-dns                     0                   e0908735a3d11       kube-ingress-dns-minikube                   kube-system
	8da50799f91fb       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               47 seconds ago       Running             cloud-spanner-emulator                   0                   8efd9c2d5215e       cloud-spanner-emulator-5bdddb765-fpvjg      default
	810cfaaa4b781       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             49 seconds ago       Running             storage-provisioner                      0                   f21678d9ca1e5       storage-provisioner                         kube-system
	5eca19a8b70c2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             49 seconds ago       Running             coredns                                  0                   5a22131de8640       coredns-66bc5c9577-bqhwx                    kube-system
	d50cb67d5dec7       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             About a minute ago   Running             kube-proxy                               0                   3c089051a3fb1       kube-proxy-2ss46                            kube-system
	b6315f71701be       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   1f55378062026       kindnet-fctx2                               kube-system
	610b806094f38       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             About a minute ago   Running             etcd                                     0                   730c82d0ec8ef       etcd-addons-802674                          kube-system
	2a7f427a075b6       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             About a minute ago   Running             kube-apiserver                           0                   01befcabcdb91       kube-apiserver-addons-802674                kube-system
	9b7e546540c7c       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             About a minute ago   Running             kube-controller-manager                  0                   af5edd33b4d3e       kube-controller-manager-addons-802674       kube-system
	dba035f34dd51       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             About a minute ago   Running             kube-scheduler                           0                   9048f2bd642b0       kube-scheduler-addons-802674                kube-system
	
	
	==> coredns [5eca19a8b70c2a0e9d976b959fbf7d7aa4c7ee8009fb16d38e7b5f5c02b8cce6] <==
	[INFO] 10.244.0.15:54322 - 65504 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000193413s
	[INFO] 10.244.0.15:36363 - 60923 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000104434s
	[INFO] 10.244.0.15:36363 - 60665 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000116357s
	[INFO] 10.244.0.15:54426 - 44895 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000097889s
	[INFO] 10.244.0.15:54426 - 44621 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000099402s
	[INFO] 10.244.0.15:38398 - 47105 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000069546s
	[INFO] 10.244.0.15:38398 - 46867 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000089306s
	[INFO] 10.244.0.15:50979 - 9694 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000086351s
	[INFO] 10.244.0.15:50979 - 9416 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000107303s
	[INFO] 10.244.0.15:44092 - 26481 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000120123s
	[INFO] 10.244.0.15:44092 - 26730 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000166899s
	[INFO] 10.244.0.22:51246 - 64453 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000214807s
	[INFO] 10.244.0.22:41298 - 64355 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000301577s
	[INFO] 10.244.0.22:32847 - 24072 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000134147s
	[INFO] 10.244.0.22:34413 - 8366 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000187562s
	[INFO] 10.244.0.22:49483 - 22734 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000087669s
	[INFO] 10.244.0.22:59667 - 11691 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009607s
	[INFO] 10.244.0.22:35843 - 34063 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004741232s
	[INFO] 10.244.0.22:48981 - 34870 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004852956s
	[INFO] 10.244.0.22:41535 - 19514 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00396088s
	[INFO] 10.244.0.22:48154 - 29342 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005100534s
	[INFO] 10.244.0.22:37334 - 48135 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00472467s
	[INFO] 10.244.0.22:47750 - 28262 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005839182s
	[INFO] 10.244.0.22:44808 - 26686 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000863585s
	[INFO] 10.244.0.22:41672 - 22449 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00133782s
	
	
	==> describe nodes <==
	Name:               addons-802674
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-802674
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=addons-802674
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_05_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-802674
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-802674"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:05:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-802674
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:06:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:06:55 +0000   Sat, 13 Dec 2025 13:05:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:06:55 +0000   Sat, 13 Dec 2025 13:05:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:06:55 +0000   Sat, 13 Dec 2025 13:05:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 13:06:55 +0000   Sat, 13 Dec 2025 13:06:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-802674
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                919a78b6-9542-415b-ad40-dc5df4183c76
	  Boot ID:                    3a031c38-2de5-4abf-9191-ca3cf8c37af1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  default                     cloud-spanner-emulator-5bdddb765-fpvjg       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  gadget                      gadget-2rht9                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  gcp-auth                    gcp-auth-78565c9fb4-x58fn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-pqrjc    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         90s
	  kube-system                 amd-gpu-device-plugin-jrjdp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 coredns-66bc5c9577-bqhwx                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     91s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 csi-hostpathplugin-hzzp2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 etcd-addons-802674                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         98s
	  kube-system                 kindnet-fctx2                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      91s
	  kube-system                 kube-apiserver-addons-802674                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-controller-manager-addons-802674        200m (2%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-proxy-2ss46                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-addons-802674                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 metrics-server-85b7d694d7-lmm9f              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         90s
	  kube-system                 nvidia-device-plugin-daemonset-bldsd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 registry-6b586f9694-8nh6x                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 registry-creds-764b6fb674-vppgx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 registry-proxy-q4bmk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 snapshot-controller-7d9fbc56b8-mqxdv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 snapshot-controller-7d9fbc56b8-nzsxs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  local-path-storage          local-path-provisioner-648f6765c9-5vk9k      0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-l5tbt               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 90s                  kube-proxy       
	  Normal  Starting                 102s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s (x8 over 102s)  kubelet          Node addons-802674 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x8 over 102s)  kubelet          Node addons-802674 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x8 over 102s)  kubelet          Node addons-802674 status is now: NodeHasSufficientPID
	  Normal  Starting                 97s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  97s                  kubelet          Node addons-802674 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s                  kubelet          Node addons-802674 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s                  kubelet          Node addons-802674 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           93s                  node-controller  Node addons-802674 event: Registered Node addons-802674 in Controller
	  Normal  NodeReady                50s                  kubelet          Node addons-802674 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 49 b1 dd f3 80 08 06
	[ +17.900193] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000012] ll header: 00000000: ff ff ff ff ff ff 8a 7a 27 2e 5a 59 08 06
	[ +14.599447] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 4e f3 7e 3b c2 dc 08 06
	[  +0.000332] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 7a 27 2e 5a 59 08 06
	[ +11.875804] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b6 36 87 06 21 41 08 06
	[  +0.053819] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff be c4 f7 a4 8d 16 08 06
	[  +3.408675] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 36 a1 76 99 d5 08 06
	[  +0.000340] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 66 49 b1 dd f3 80 08 06
	[  +3.370005] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 28 b1 ec 74 5a 08 06
	[Dec13 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 16 70 0d f4 be 08 06
	[  +0.000399] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 28 b1 ec 74 5a 08 06
	[ +23.808433] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea b7 dd 32 fb 08 08 06
	[  +0.000396] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be c4 f7 a4 8d 16 08 06
	
	
	==> etcd [610b806094f3861cda2f55f3c5ae8348739fd03173056cb05f1e55d0f129881d] <==
	{"level":"warn","ts":"2025-12-13T13:05:20.462711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:05:20.469414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:05:20.484565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:05:20.490610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:05:20.498122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:05:20.539550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:05:31.698975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:05:31.705276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:05:57.931756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:05:57.938437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:05:57.957484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:05:57.963904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34030","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T13:06:21.076128Z","caller":"traceutil/trace.go:172","msg":"trace[861018002] transaction","detail":"{read_only:false; response_revision:1021; number_of_response:1; }","duration":"102.425338ms","start":"2025-12-13T13:06:20.973682Z","end":"2025-12-13T13:06:21.076108Z","steps":["trace[861018002] 'process raft request'  (duration: 102.310681ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:06:43.745943Z","caller":"traceutil/trace.go:172","msg":"trace[24304390] linearizableReadLoop","detail":"{readStateIndex:1228; appliedIndex:1228; }","duration":"135.01383ms","start":"2025-12-13T13:06:43.610904Z","end":"2025-12-13T13:06:43.745918Z","steps":["trace[24304390] 'read index received'  (duration: 135.006541ms)","trace[24304390] 'applied index is now lower than readState.Index'  (duration: 5.687µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T13:06:43.746152Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.214295ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T13:06:43.746201Z","caller":"traceutil/trace.go:172","msg":"trace[1387126825] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1195; }","duration":"135.299813ms","start":"2025-12-13T13:06:43.610892Z","end":"2025-12-13T13:06:43.746192Z","steps":["trace[1387126825] 'agreement among raft nodes before linearized reading'  (duration: 135.165064ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:06:43.746195Z","caller":"traceutil/trace.go:172","msg":"trace[1898574573] transaction","detail":"{read_only:false; response_revision:1196; number_of_response:1; }","duration":"195.681987ms","start":"2025-12-13T13:06:43.550496Z","end":"2025-12-13T13:06:43.746178Z","steps":["trace[1898574573] 'process raft request'  (duration: 195.528882ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:06:49.330072Z","caller":"traceutil/trace.go:172","msg":"trace[1495101274] transaction","detail":"{read_only:false; response_revision:1241; number_of_response:1; }","duration":"131.841715ms","start":"2025-12-13T13:06:49.198213Z","end":"2025-12-13T13:06:49.330055Z","steps":["trace[1495101274] 'process raft request'  (duration: 131.807713ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:06:49.330120Z","caller":"traceutil/trace.go:172","msg":"trace[18349875] transaction","detail":"{read_only:false; response_revision:1240; number_of_response:1; }","duration":"132.716688ms","start":"2025-12-13T13:06:49.197383Z","end":"2025-12-13T13:06:49.330099Z","steps":["trace[18349875] 'process raft request'  (duration: 132.559615ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T13:06:49.517455Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.000204ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T13:06:49.517524Z","caller":"traceutil/trace.go:172","msg":"trace[665132156] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1241; }","duration":"132.083768ms","start":"2025-12-13T13:06:49.385426Z","end":"2025-12-13T13:06:49.517510Z","steps":["trace[665132156] 'range keys from in-memory index tree'  (duration: 131.911759ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T13:06:49.517471Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.361607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourceclaims\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-12-13T13:06:49.517551Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.12289ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-12-13T13:06:49.517585Z","caller":"traceutil/trace.go:172","msg":"trace[637629767] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1241; }","duration":"132.15912ms","start":"2025-12-13T13:06:49.385416Z","end":"2025-12-13T13:06:49.517575Z","steps":["trace[637629767] 'range keys from in-memory index tree'  (duration: 131.979698ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:06:49.517562Z","caller":"traceutil/trace.go:172","msg":"trace[364224605] range","detail":"{range_begin:/registry/resourceclaims; range_end:; response_count:0; response_revision:1241; }","duration":"123.454512ms","start":"2025-12-13T13:06:49.394094Z","end":"2025-12-13T13:06:49.517548Z","steps":["trace[364224605] 'range keys from in-memory index tree'  (duration: 123.306259ms)"],"step_count":1}
	
	
	==> gcp-auth [500e4d8d6d926896ae927d438c08663bc74a1f62f6f004a9ccf8e29479bc4463] <==
	2025/12/13 13:06:44 GCP Auth Webhook started!
	2025/12/13 13:06:49 Ready to marshal response ...
	2025/12/13 13:06:49 Ready to write response ...
	2025/12/13 13:06:51 Ready to marshal response ...
	2025/12/13 13:06:51 Ready to write response ...
	2025/12/13 13:06:51 Ready to marshal response ...
	2025/12/13 13:06:51 Ready to write response ...
	
	
	==> kernel <==
	 13:07:00 up  1:49,  0 user,  load average: 2.36, 1.43, 1.49
	Linux addons-802674 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b6315f71701be89e474fba173cf05ee0075e34674512768e6df77a3cc4cd9523] <==
	I1213 13:05:29.673966       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T13:05:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 13:05:29.969197       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 13:05:29.969227       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 13:05:29.969239       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 13:05:29.970086       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1213 13:05:59.970290       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1213 13:05:59.970293       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1213 13:05:59.970307       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1213 13:05:59.970290       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1213 13:06:01.469360       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 13:06:01.469390       1 metrics.go:72] Registering metrics
	I1213 13:06:01.469448       1 controller.go:711] "Syncing nftables rules"
	I1213 13:06:09.973365       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:06:09.973403       1 main.go:301] handling current node
	I1213 13:06:19.968627       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:06:19.968656       1 main.go:301] handling current node
	I1213 13:06:29.969135       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:06:29.969174       1 main.go:301] handling current node
	I1213 13:06:39.969032       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:06:39.969150       1 main.go:301] handling current node
	I1213 13:06:49.968745       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:06:49.968829       1 main.go:301] handling current node
	I1213 13:06:59.969390       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 13:06:59.969444       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2a7f427a075b6ebada9bc037f76c3a7326d7c26ef26054dd05f59dd7a696441e] <==
	I1213 13:05:37.439679       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.97.251.54"}
	W1213 13:05:57.931718       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1213 13:05:57.938373       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1213 13:05:57.957476       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1213 13:05:57.963938       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1213 13:06:10.167344       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.251.54:443: connect: connection refused
	E1213 13:06:10.167402       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.251.54:443: connect: connection refused" logger="UnhandledError"
	W1213 13:06:10.167453       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.251.54:443: connect: connection refused
	E1213 13:06:10.167513       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.251.54:443: connect: connection refused" logger="UnhandledError"
	W1213 13:06:10.184812       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.251.54:443: connect: connection refused
	E1213 13:06:10.184945       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.251.54:443: connect: connection refused" logger="UnhandledError"
	W1213 13:06:10.186067       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.251.54:443: connect: connection refused
	E1213 13:06:10.186107       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.251.54:443: connect: connection refused" logger="UnhandledError"
	E1213 13:06:27.736403       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.35.55:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.35.55:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.35.55:443: connect: connection refused" logger="UnhandledError"
	W1213 13:06:27.736467       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 13:06:27.736859       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1213 13:06:27.737182       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.35.55:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.35.55:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.35.55:443: connect: connection refused" logger="UnhandledError"
	E1213 13:06:27.741878       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.35.55:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.35.55:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.35.55:443: connect: connection refused" logger="UnhandledError"
	E1213 13:06:27.762512       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.35.55:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.35.55:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.35.55:443: connect: connection refused" logger="UnhandledError"
	I1213 13:06:27.829279       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1213 13:06:58.668627       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56160: use of closed network connection
	E1213 13:06:58.817551       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56192: use of closed network connection
	
	
	==> kube-controller-manager [9b7e546540c7cea0b7f684aeaa74db9dca87eb76f77d77d8121f3927b1239ae2] <==
	I1213 13:05:27.917286       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1213 13:05:27.917302       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 13:05:27.917433       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1213 13:05:27.917747       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 13:05:27.918688       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 13:05:27.919350       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1213 13:05:27.921662       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 13:05:27.921758       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 13:05:27.922976       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 13:05:27.934323       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1213 13:05:27.934368       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1213 13:05:27.934390       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1213 13:05:27.934401       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1213 13:05:27.934406       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1213 13:05:27.935801       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 13:05:27.940026       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-802674" podCIDRs=["10.244.0.0/24"]
	E1213 13:05:30.432050       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1213 13:05:57.926263       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1213 13:05:57.926403       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1213 13:05:57.926463       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1213 13:05:57.949166       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1213 13:05:57.952654       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1213 13:05:58.027281       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 13:05:58.053635       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 13:06:12.858178       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d50cb67d5dec7ec3f682549ab14b880502935a667c57f8d8cdb0c463515a22e6] <==
	I1213 13:05:29.517294       1 server_linux.go:53] "Using iptables proxy"
	I1213 13:05:29.626873       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 13:05:29.728800       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 13:05:29.728849       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1213 13:05:29.728990       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:05:30.101546       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:05:30.101628       1 server_linux.go:132] "Using iptables Proxier"
	I1213 13:05:30.159360       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:05:30.208731       1 server.go:527] "Version info" version="v1.34.2"
	I1213 13:05:30.209074       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:05:30.212113       1 config.go:200] "Starting service config controller"
	I1213 13:05:30.212395       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:05:30.212454       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:05:30.212554       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:05:30.212593       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:05:30.213251       1 config.go:309] "Starting node config controller"
	I1213 13:05:30.213302       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:05:30.213329       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:05:30.212200       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:05:30.319286       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 13:05:30.319320       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 13:05:30.319284       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [dba035f34dd51a8cd71b4f0ae554035ac03076228fce0be93b5b35ef0ca0e069] <==
	E1213 13:05:20.927492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 13:05:20.927569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 13:05:20.927582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 13:05:20.927607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 13:05:20.927680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 13:05:20.927696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 13:05:20.927728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 13:05:20.927618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 13:05:20.927837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 13:05:20.927908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 13:05:20.927963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 13:05:20.927979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 13:05:21.730875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 13:05:21.735023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 13:05:21.742119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 13:05:21.868457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 13:05:21.908573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 13:05:21.932404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 13:05:21.932436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 13:05:21.937696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 13:05:21.997737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 13:05:22.013674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 13:05:22.026683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 13:05:22.119887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1213 13:05:23.725423       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 13:06:31 addons-802674 kubelet[1298]: I1213 13:06:31.738143    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/snapshot-controller-7d9fbc56b8-mqxdv" podStartSLOduration=40.659772926 podStartE2EDuration="1m0.738124768s" podCreationTimestamp="2025-12-13 13:05:31 +0000 UTC" firstStartedPulling="2025-12-13 13:06:10.609184672 +0000 UTC m=+47.202781351" lastFinishedPulling="2025-12-13 13:06:30.687536515 +0000 UTC m=+67.281133193" observedRunningTime="2025-12-13 13:06:31.737529218 +0000 UTC m=+68.331125914" watchObservedRunningTime="2025-12-13 13:06:31.738124768 +0000 UTC m=+68.331721463"
	Dec 13 13:06:31 addons-802674 kubelet[1298]: I1213 13:06:31.746245    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpath-attacher-0" podStartSLOduration=40.8589514 podStartE2EDuration="1m1.746226365s" podCreationTimestamp="2025-12-13 13:05:30 +0000 UTC" firstStartedPulling="2025-12-13 13:06:10.611398915 +0000 UTC m=+47.204995603" lastFinishedPulling="2025-12-13 13:06:31.498673892 +0000 UTC m=+68.092270568" observedRunningTime="2025-12-13 13:06:31.746038632 +0000 UTC m=+68.339635329" watchObservedRunningTime="2025-12-13 13:06:31.746226365 +0000 UTC m=+68.339823061"
	Dec 13 13:06:33 addons-802674 kubelet[1298]: I1213 13:06:33.739558    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-jrjdp" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 13:06:33 addons-802674 kubelet[1298]: I1213 13:06:33.750277    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-jrjdp" podStartSLOduration=1.606233675 podStartE2EDuration="23.750250516s" podCreationTimestamp="2025-12-13 13:06:10 +0000 UTC" firstStartedPulling="2025-12-13 13:06:10.612349595 +0000 UTC m=+47.205946287" lastFinishedPulling="2025-12-13 13:06:32.756366437 +0000 UTC m=+69.349963128" observedRunningTime="2025-12-13 13:06:33.749888021 +0000 UTC m=+70.343484717" watchObservedRunningTime="2025-12-13 13:06:33.750250516 +0000 UTC m=+70.343847211"
	Dec 13 13:06:33 addons-802674 kubelet[1298]: I1213 13:06:33.861069    1298 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkw7k\" (UniqueName: \"kubernetes.io/projected/f54fc6c8-29bc-41d7-b9f2-eb746bc4641f-kube-api-access-tkw7k\") pod \"f54fc6c8-29bc-41d7-b9f2-eb746bc4641f\" (UID: \"f54fc6c8-29bc-41d7-b9f2-eb746bc4641f\") "
	Dec 13 13:06:33 addons-802674 kubelet[1298]: I1213 13:06:33.863770    1298 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f54fc6c8-29bc-41d7-b9f2-eb746bc4641f-kube-api-access-tkw7k" (OuterVolumeSpecName: "kube-api-access-tkw7k") pod "f54fc6c8-29bc-41d7-b9f2-eb746bc4641f" (UID: "f54fc6c8-29bc-41d7-b9f2-eb746bc4641f"). InnerVolumeSpecName "kube-api-access-tkw7k". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 13 13:06:33 addons-802674 kubelet[1298]: I1213 13:06:33.961941    1298 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkw7k\" (UniqueName: \"kubernetes.io/projected/f54fc6c8-29bc-41d7-b9f2-eb746bc4641f-kube-api-access-tkw7k\") on node \"addons-802674\" DevicePath \"\""
	Dec 13 13:06:34 addons-802674 kubelet[1298]: I1213 13:06:34.746078    1298 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73989b7a8c617908b94f96c8f30cde1444d20c097eb5dbf6979762246b01c64c"
	Dec 13 13:06:34 addons-802674 kubelet[1298]: I1213 13:06:34.746333    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-jrjdp" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 13:06:35 addons-802674 kubelet[1298]: I1213 13:06:35.753043    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-q4bmk" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 13:06:35 addons-802674 kubelet[1298]: I1213 13:06:35.837122    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-q4bmk" podStartSLOduration=1.676702916 podStartE2EDuration="25.837096853s" podCreationTimestamp="2025-12-13 13:06:10 +0000 UTC" firstStartedPulling="2025-12-13 13:06:10.618962786 +0000 UTC m=+47.212559462" lastFinishedPulling="2025-12-13 13:06:34.779356712 +0000 UTC m=+71.372953399" observedRunningTime="2025-12-13 13:06:35.83647107 +0000 UTC m=+72.430067765" watchObservedRunningTime="2025-12-13 13:06:35.837096853 +0000 UTC m=+72.430693549"
	Dec 13 13:06:36 addons-802674 kubelet[1298]: I1213 13:06:36.757442    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-q4bmk" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 13:06:38 addons-802674 kubelet[1298]: I1213 13:06:38.776908    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-pqrjc" podStartSLOduration=56.585492051 podStartE2EDuration="1m8.776884754s" podCreationTimestamp="2025-12-13 13:05:30 +0000 UTC" firstStartedPulling="2025-12-13 13:06:26.449357719 +0000 UTC m=+63.042954400" lastFinishedPulling="2025-12-13 13:06:38.640750417 +0000 UTC m=+75.234347103" observedRunningTime="2025-12-13 13:06:38.775981143 +0000 UTC m=+75.369577857" watchObservedRunningTime="2025-12-13 13:06:38.776884754 +0000 UTC m=+75.370481447"
	Dec 13 13:06:41 addons-802674 kubelet[1298]: I1213 13:06:41.792887    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-2rht9" podStartSLOduration=64.982205675 podStartE2EDuration="1m11.792869439s" podCreationTimestamp="2025-12-13 13:05:30 +0000 UTC" firstStartedPulling="2025-12-13 13:06:34.827651434 +0000 UTC m=+71.421248112" lastFinishedPulling="2025-12-13 13:06:41.638315201 +0000 UTC m=+78.231911876" observedRunningTime="2025-12-13 13:06:41.792109597 +0000 UTC m=+78.385706298" watchObservedRunningTime="2025-12-13 13:06:41.792869439 +0000 UTC m=+78.386466114"
	Dec 13 13:06:42 addons-802674 kubelet[1298]: E1213 13:06:42.026755    1298 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 13 13:06:42 addons-802674 kubelet[1298]: E1213 13:06:42.026952    1298 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/285d2b4d-1e18-410a-9ac4-2fe91c56bfd2-gcr-creds podName:285d2b4d-1e18-410a-9ac4-2fe91c56bfd2 nodeName:}" failed. No retries permitted until 2025-12-13 13:07:14.026910184 +0000 UTC m=+110.620506867 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/285d2b4d-1e18-410a-9ac4-2fe91c56bfd2-gcr-creds") pod "registry-creds-764b6fb674-vppgx" (UID: "285d2b4d-1e18-410a-9ac4-2fe91c56bfd2") : secret "registry-creds-gcr" not found
	Dec 13 13:06:43 addons-802674 kubelet[1298]: I1213 13:06:43.542221    1298 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 13 13:06:43 addons-802674 kubelet[1298]: I1213 13:06:43.542276    1298 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 13 13:06:44 addons-802674 kubelet[1298]: I1213 13:06:44.911600    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-x58fn" podStartSLOduration=66.289367296 podStartE2EDuration="1m7.911577096s" podCreationTimestamp="2025-12-13 13:05:37 +0000 UTC" firstStartedPulling="2025-12-13 13:06:42.350853455 +0000 UTC m=+78.944450145" lastFinishedPulling="2025-12-13 13:06:43.973063266 +0000 UTC m=+80.566659945" observedRunningTime="2025-12-13 13:06:44.809420306 +0000 UTC m=+81.403017002" watchObservedRunningTime="2025-12-13 13:06:44.911577096 +0000 UTC m=+81.505173798"
	Dec 13 13:06:46 addons-802674 kubelet[1298]: I1213 13:06:46.827745    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-hzzp2" podStartSLOduration=1.098820559 podStartE2EDuration="36.827722496s" podCreationTimestamp="2025-12-13 13:06:10 +0000 UTC" firstStartedPulling="2025-12-13 13:06:10.61294584 +0000 UTC m=+47.206542532" lastFinishedPulling="2025-12-13 13:06:46.341847791 +0000 UTC m=+82.935444469" observedRunningTime="2025-12-13 13:06:46.826590879 +0000 UTC m=+83.420187575" watchObservedRunningTime="2025-12-13 13:06:46.827722496 +0000 UTC m=+83.421319190"
	Dec 13 13:06:51 addons-802674 kubelet[1298]: I1213 13:06:51.597403    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/29062754-a680-492d-be0c-824bf09da2ed-gcp-creds\") pod \"busybox\" (UID: \"29062754-a680-492d-be0c-824bf09da2ed\") " pod="default/busybox"
	Dec 13 13:06:51 addons-802674 kubelet[1298]: I1213 13:06:51.597486    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc2pm\" (UniqueName: \"kubernetes.io/projected/29062754-a680-492d-be0c-824bf09da2ed-kube-api-access-mc2pm\") pod \"busybox\" (UID: \"29062754-a680-492d-be0c-824bf09da2ed\") " pod="default/busybox"
	Dec 13 13:06:52 addons-802674 kubelet[1298]: I1213 13:06:52.848474    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.256537684 podStartE2EDuration="1.848451241s" podCreationTimestamp="2025-12-13 13:06:51 +0000 UTC" firstStartedPulling="2025-12-13 13:06:51.849734646 +0000 UTC m=+88.443331321" lastFinishedPulling="2025-12-13 13:06:52.44164819 +0000 UTC m=+89.035244878" observedRunningTime="2025-12-13 13:06:52.847219825 +0000 UTC m=+89.440816520" watchObservedRunningTime="2025-12-13 13:06:52.848451241 +0000 UTC m=+89.442047937"
	Dec 13 13:06:58 addons-802674 kubelet[1298]: E1213 13:06:58.668537    1298 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:55670->127.0.0.1:38709: write tcp 127.0.0.1:55670->127.0.0.1:38709: write: broken pipe
	Dec 13 13:06:59 addons-802674 kubelet[1298]: I1213 13:06:59.490379    1298 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b7164c7-9760-4bbc-afed-2e7df24a4377" path="/var/lib/kubelet/pods/6b7164c7-9760-4bbc-afed-2e7df24a4377/volumes"
	
	
	==> storage-provisioner [810cfaaa4b7814b312b2db787f8ec029d4e832cc7e12034fb85045552bd3f724] <==
	W1213 13:06:34.934908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:36.938232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:36.942689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:38.947095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:38.952938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:40.956676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:40.963980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:42.967523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:42.975231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:44.978541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:44.984644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:46.987112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:46.990585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:48.993336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:49.001996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:51.005276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:51.008865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:53.011844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:53.016033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:55.019369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:55.023369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:57.026457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:57.031511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:59.034648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:06:59.038948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-802674 -n addons-802674
helpers_test.go:270: (dbg) Run:  kubectl --context addons-802674 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: gcp-auth-certs-patch-hbxtn ingress-nginx-admission-create-4vxk5 ingress-nginx-admission-patch-kh6b6 registry-creds-764b6fb674-vppgx
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-802674 describe pod gcp-auth-certs-patch-hbxtn ingress-nginx-admission-create-4vxk5 ingress-nginx-admission-patch-kh6b6 registry-creds-764b6fb674-vppgx
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-802674 describe pod gcp-auth-certs-patch-hbxtn ingress-nginx-admission-create-4vxk5 ingress-nginx-admission-patch-kh6b6 registry-creds-764b6fb674-vppgx: exit status 1 (60.831275ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-hbxtn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-4vxk5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kh6b6" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-vppgx" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-802674 describe pod gcp-auth-certs-patch-hbxtn ingress-nginx-admission-create-4vxk5 ingress-nginx-admission-patch-kh6b6 registry-creds-764b6fb674-vppgx: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-802674 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-802674 addons disable headlamp --alsologtostderr -v=1: exit status 11 (245.849912ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:07:01.452348  404837 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:07:01.452617  404837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:01.452627  404837 out.go:374] Setting ErrFile to fd 2...
	I1213 13:07:01.452631  404837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:01.452820  404837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:07:01.453051  404837 mustload.go:66] Loading cluster: addons-802674
	I1213 13:07:01.453398  404837 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:01.453422  404837 addons.go:622] checking whether the cluster is paused
	I1213 13:07:01.453535  404837 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:01.453550  404837 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:07:01.453942  404837 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:07:01.472270  404837 ssh_runner.go:195] Run: systemctl --version
	I1213 13:07:01.472323  404837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:07:01.491114  404837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:07:01.586200  404837 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:07:01.586271  404837 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:07:01.614771  404837 cri.go:89] found id: "efa46cf269b564b4844602a1d159fe37ab66ca5f6b418f189ee827b9dba093c8"
	I1213 13:07:01.614807  404837 cri.go:89] found id: "ae277236625835803563b5d3709c95c1715a58bb7565d8ec6086941d0839195e"
	I1213 13:07:01.614814  404837 cri.go:89] found id: "25c2ccc8d56eb50465c9572ac4b69e4c56f4fc5934e450f09c18265ee0577194"
	I1213 13:07:01.614819  404837 cri.go:89] found id: "00b38c263e00072dd7d50a33875d10fc536592b2a1d6e234346711aac7cbbec0"
	I1213 13:07:01.614823  404837 cri.go:89] found id: "6df323a2878def1aae2a14ca9c2ad038546721c6ae36b6f316f313176188b46c"
	I1213 13:07:01.614827  404837 cri.go:89] found id: "c5db025aa30e9cd2c67c81ec6bc3c8ea9785b55f88c26b2645dcdbd948a7de0d"
	I1213 13:07:01.614830  404837 cri.go:89] found id: "263b6770119de12b6f6ae321a34d15fe0c91d69ef191dfcd91463e142f87e2d3"
	I1213 13:07:01.614832  404837 cri.go:89] found id: "f08ae0fc41016ef54669e28ca43b05abcc99b07e53b91caaf3b697ef447ee88d"
	I1213 13:07:01.614835  404837 cri.go:89] found id: "6d85d43816c0e9c27cdb9a0406519758a7c04501507c2a84cdacf18ed0bfe19f"
	I1213 13:07:01.614853  404837 cri.go:89] found id: "40aee451d49aa718e9b9b630dcc767fa2e58079b1b3f9728f0c44aa6c3b5c7e5"
	I1213 13:07:01.614856  404837 cri.go:89] found id: "7f147ccf5e501405b11f6c314e4bfd0d7c26b4a6bf64001ba70bbe56a38b0504"
	I1213 13:07:01.614859  404837 cri.go:89] found id: "a9051d728dbfaaa93fa17ffd17029974b838f52c264e001035d0dcb21ffd793a"
	I1213 13:07:01.614863  404837 cri.go:89] found id: "fc7d97af030f51f4603abd265b93269845365378e8e8c119222bafedc7cc4351"
	I1213 13:07:01.614869  404837 cri.go:89] found id: "f4ac5ed0bb71af6a3a22c2384168e3c4e9e23c1de940ae834d03068e9fea08ee"
	I1213 13:07:01.614873  404837 cri.go:89] found id: "bb2165f7660fc2ba491c4871263b79975a85cb6def2d2a4f73eca8a2dd7d8f07"
	I1213 13:07:01.614888  404837 cri.go:89] found id: "be21f9e65e565f792b744333cf27c95ecfa408d73cd2a551f0c5c7f265a293e3"
	I1213 13:07:01.614897  404837 cri.go:89] found id: "810cfaaa4b7814b312b2db787f8ec029d4e832cc7e12034fb85045552bd3f724"
	I1213 13:07:01.614902  404837 cri.go:89] found id: "5eca19a8b70c2a0e9d976b959fbf7d7aa4c7ee8009fb16d38e7b5f5c02b8cce6"
	I1213 13:07:01.614906  404837 cri.go:89] found id: "d50cb67d5dec7ec3f682549ab14b880502935a667c57f8d8cdb0c463515a22e6"
	I1213 13:07:01.614911  404837 cri.go:89] found id: "b6315f71701be89e474fba173cf05ee0075e34674512768e6df77a3cc4cd9523"
	I1213 13:07:01.614916  404837 cri.go:89] found id: "610b806094f3861cda2f55f3c5ae8348739fd03173056cb05f1e55d0f129881d"
	I1213 13:07:01.614920  404837 cri.go:89] found id: "2a7f427a075b6ebada9bc037f76c3a7326d7c26ef26054dd05f59dd7a696441e"
	I1213 13:07:01.614924  404837 cri.go:89] found id: "9b7e546540c7cea0b7f684aeaa74db9dca87eb76f77d77d8121f3927b1239ae2"
	I1213 13:07:01.614928  404837 cri.go:89] found id: "dba035f34dd51a8cd71b4f0ae554035ac03076228fce0be93b5b35ef0ca0e069"
	I1213 13:07:01.614931  404837 cri.go:89] found id: ""
	I1213 13:07:01.614970  404837 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:07:01.628972  404837 out.go:203] 
	W1213 13:07:01.630352  404837 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 13:07:01.630369  404837 out.go:285] * 
	* 
	W1213 13:07:01.634535  404837 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 13:07:01.635896  404837 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-802674 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.57s)
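Note on the shared failure signature (annotation, not part of the captured output): every addons-disable failure in this group exits with MK_ADDON_DISABLE_PAUSED because minikube's paused-state check shells into the node and runs "sudo runc list -f json", which fails with "open /run/runc: no such file or directory" on this crio-backed profile. A minimal reproduction sketch, assuming the addons-802674 profile is still running; both commands are lifted verbatim from the traces above:

	# lists kube-system containers through CRI-O, as the disable path does before the failing check
	out/minikube-linux-amd64 -p addons-802674 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# reproduces the failing check itself: exits 1 with "open /run/runc: no such file or directory"
	out/minikube-linux-amd64 -p addons-802674 ssh "sudo runc list -f json"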

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-fpvjg" [71a25dea-bcb5-49c5-abd5-3b0cebdad5b0] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00300538s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-802674 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-802674 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (246.368552ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:07:22.930823  407129 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:07:22.930936  407129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:22.930948  407129 out.go:374] Setting ErrFile to fd 2...
	I1213 13:07:22.930955  407129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:22.931145  407129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:07:22.931412  407129 mustload.go:66] Loading cluster: addons-802674
	I1213 13:07:22.931754  407129 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:22.931792  407129 addons.go:622] checking whether the cluster is paused
	I1213 13:07:22.931877  407129 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:22.931890  407129 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:07:22.932231  407129 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:07:22.950150  407129 ssh_runner.go:195] Run: systemctl --version
	I1213 13:07:22.950217  407129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:07:22.967554  407129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:07:23.064235  407129 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:07:23.064321  407129 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:07:23.093291  407129 cri.go:89] found id: "f7c4229a576fe07cf0919814dfee6c0705b49c93f6835f46215361a77c4c55ac"
	I1213 13:07:23.093318  407129 cri.go:89] found id: "efa46cf269b564b4844602a1d159fe37ab66ca5f6b418f189ee827b9dba093c8"
	I1213 13:07:23.093325  407129 cri.go:89] found id: "ae277236625835803563b5d3709c95c1715a58bb7565d8ec6086941d0839195e"
	I1213 13:07:23.093329  407129 cri.go:89] found id: "25c2ccc8d56eb50465c9572ac4b69e4c56f4fc5934e450f09c18265ee0577194"
	I1213 13:07:23.093334  407129 cri.go:89] found id: "00b38c263e00072dd7d50a33875d10fc536592b2a1d6e234346711aac7cbbec0"
	I1213 13:07:23.093339  407129 cri.go:89] found id: "6df323a2878def1aae2a14ca9c2ad038546721c6ae36b6f316f313176188b46c"
	I1213 13:07:23.093343  407129 cri.go:89] found id: "c5db025aa30e9cd2c67c81ec6bc3c8ea9785b55f88c26b2645dcdbd948a7de0d"
	I1213 13:07:23.093347  407129 cri.go:89] found id: "263b6770119de12b6f6ae321a34d15fe0c91d69ef191dfcd91463e142f87e2d3"
	I1213 13:07:23.093351  407129 cri.go:89] found id: "f08ae0fc41016ef54669e28ca43b05abcc99b07e53b91caaf3b697ef447ee88d"
	I1213 13:07:23.093360  407129 cri.go:89] found id: "6d85d43816c0e9c27cdb9a0406519758a7c04501507c2a84cdacf18ed0bfe19f"
	I1213 13:07:23.093365  407129 cri.go:89] found id: "40aee451d49aa718e9b9b630dcc767fa2e58079b1b3f9728f0c44aa6c3b5c7e5"
	I1213 13:07:23.093368  407129 cri.go:89] found id: "7f147ccf5e501405b11f6c314e4bfd0d7c26b4a6bf64001ba70bbe56a38b0504"
	I1213 13:07:23.093373  407129 cri.go:89] found id: "a9051d728dbfaaa93fa17ffd17029974b838f52c264e001035d0dcb21ffd793a"
	I1213 13:07:23.093377  407129 cri.go:89] found id: "fc7d97af030f51f4603abd265b93269845365378e8e8c119222bafedc7cc4351"
	I1213 13:07:23.093382  407129 cri.go:89] found id: "f4ac5ed0bb71af6a3a22c2384168e3c4e9e23c1de940ae834d03068e9fea08ee"
	I1213 13:07:23.093391  407129 cri.go:89] found id: "bb2165f7660fc2ba491c4871263b79975a85cb6def2d2a4f73eca8a2dd7d8f07"
	I1213 13:07:23.093400  407129 cri.go:89] found id: "be21f9e65e565f792b744333cf27c95ecfa408d73cd2a551f0c5c7f265a293e3"
	I1213 13:07:23.093406  407129 cri.go:89] found id: "810cfaaa4b7814b312b2db787f8ec029d4e832cc7e12034fb85045552bd3f724"
	I1213 13:07:23.093411  407129 cri.go:89] found id: "5eca19a8b70c2a0e9d976b959fbf7d7aa4c7ee8009fb16d38e7b5f5c02b8cce6"
	I1213 13:07:23.093415  407129 cri.go:89] found id: "d50cb67d5dec7ec3f682549ab14b880502935a667c57f8d8cdb0c463515a22e6"
	I1213 13:07:23.093426  407129 cri.go:89] found id: "b6315f71701be89e474fba173cf05ee0075e34674512768e6df77a3cc4cd9523"
	I1213 13:07:23.093431  407129 cri.go:89] found id: "610b806094f3861cda2f55f3c5ae8348739fd03173056cb05f1e55d0f129881d"
	I1213 13:07:23.093437  407129 cri.go:89] found id: "2a7f427a075b6ebada9bc037f76c3a7326d7c26ef26054dd05f59dd7a696441e"
	I1213 13:07:23.093442  407129 cri.go:89] found id: "9b7e546540c7cea0b7f684aeaa74db9dca87eb76f77d77d8121f3927b1239ae2"
	I1213 13:07:23.093450  407129 cri.go:89] found id: "dba035f34dd51a8cd71b4f0ae554035ac03076228fce0be93b5b35ef0ca0e069"
	I1213 13:07:23.093464  407129 cri.go:89] found id: ""
	I1213 13:07:23.093514  407129 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:07:23.107484  407129 out.go:203] 
	W1213 13:07:23.108676  407129 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 13:07:23.108700  407129 out.go:285] * 
	* 
	W1213 13:07:23.112580  407129 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 13:07:23.113832  407129 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-802674 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.25s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (10.11s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-802674 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-802674 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-802674 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [acbd96e2-0132-4609-8149-54250fbfbed2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [acbd96e2-0132-4609-8149-54250fbfbed2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [acbd96e2-0132-4609-8149-54250fbfbed2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003708882s
addons_test.go:969: (dbg) Run:  kubectl --context addons-802674 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-802674 ssh "cat /opt/local-path-provisioner/pvc-c7df13f3-7532-4920-a88e-f3a79a290a56_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-802674 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-802674 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-802674 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-802674 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (241.232072ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:07:27.527060  407448 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:07:27.527301  407448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:27.527310  407448 out.go:374] Setting ErrFile to fd 2...
	I1213 13:07:27.527314  407448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:27.527519  407448 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:07:27.527789  407448 mustload.go:66] Loading cluster: addons-802674
	I1213 13:07:27.528125  407448 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:27.528148  407448 addons.go:622] checking whether the cluster is paused
	I1213 13:07:27.528229  407448 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:27.528242  407448 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:07:27.528587  407448 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:07:27.546083  407448 ssh_runner.go:195] Run: systemctl --version
	I1213 13:07:27.546141  407448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:07:27.562013  407448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:07:27.656445  407448 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:07:27.656558  407448 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:07:27.684977  407448 cri.go:89] found id: "f7c4229a576fe07cf0919814dfee6c0705b49c93f6835f46215361a77c4c55ac"
	I1213 13:07:27.685001  407448 cri.go:89] found id: "efa46cf269b564b4844602a1d159fe37ab66ca5f6b418f189ee827b9dba093c8"
	I1213 13:07:27.685005  407448 cri.go:89] found id: "ae277236625835803563b5d3709c95c1715a58bb7565d8ec6086941d0839195e"
	I1213 13:07:27.685009  407448 cri.go:89] found id: "25c2ccc8d56eb50465c9572ac4b69e4c56f4fc5934e450f09c18265ee0577194"
	I1213 13:07:27.685012  407448 cri.go:89] found id: "00b38c263e00072dd7d50a33875d10fc536592b2a1d6e234346711aac7cbbec0"
	I1213 13:07:27.685015  407448 cri.go:89] found id: "6df323a2878def1aae2a14ca9c2ad038546721c6ae36b6f316f313176188b46c"
	I1213 13:07:27.685018  407448 cri.go:89] found id: "c5db025aa30e9cd2c67c81ec6bc3c8ea9785b55f88c26b2645dcdbd948a7de0d"
	I1213 13:07:27.685020  407448 cri.go:89] found id: "263b6770119de12b6f6ae321a34d15fe0c91d69ef191dfcd91463e142f87e2d3"
	I1213 13:07:27.685023  407448 cri.go:89] found id: "f08ae0fc41016ef54669e28ca43b05abcc99b07e53b91caaf3b697ef447ee88d"
	I1213 13:07:27.685029  407448 cri.go:89] found id: "6d85d43816c0e9c27cdb9a0406519758a7c04501507c2a84cdacf18ed0bfe19f"
	I1213 13:07:27.685031  407448 cri.go:89] found id: "40aee451d49aa718e9b9b630dcc767fa2e58079b1b3f9728f0c44aa6c3b5c7e5"
	I1213 13:07:27.685035  407448 cri.go:89] found id: "7f147ccf5e501405b11f6c314e4bfd0d7c26b4a6bf64001ba70bbe56a38b0504"
	I1213 13:07:27.685038  407448 cri.go:89] found id: "a9051d728dbfaaa93fa17ffd17029974b838f52c264e001035d0dcb21ffd793a"
	I1213 13:07:27.685040  407448 cri.go:89] found id: "fc7d97af030f51f4603abd265b93269845365378e8e8c119222bafedc7cc4351"
	I1213 13:07:27.685043  407448 cri.go:89] found id: "f4ac5ed0bb71af6a3a22c2384168e3c4e9e23c1de940ae834d03068e9fea08ee"
	I1213 13:07:27.685047  407448 cri.go:89] found id: "bb2165f7660fc2ba491c4871263b79975a85cb6def2d2a4f73eca8a2dd7d8f07"
	I1213 13:07:27.685051  407448 cri.go:89] found id: "be21f9e65e565f792b744333cf27c95ecfa408d73cd2a551f0c5c7f265a293e3"
	I1213 13:07:27.685056  407448 cri.go:89] found id: "810cfaaa4b7814b312b2db787f8ec029d4e832cc7e12034fb85045552bd3f724"
	I1213 13:07:27.685058  407448 cri.go:89] found id: "5eca19a8b70c2a0e9d976b959fbf7d7aa4c7ee8009fb16d38e7b5f5c02b8cce6"
	I1213 13:07:27.685061  407448 cri.go:89] found id: "d50cb67d5dec7ec3f682549ab14b880502935a667c57f8d8cdb0c463515a22e6"
	I1213 13:07:27.685064  407448 cri.go:89] found id: "b6315f71701be89e474fba173cf05ee0075e34674512768e6df77a3cc4cd9523"
	I1213 13:07:27.685075  407448 cri.go:89] found id: "610b806094f3861cda2f55f3c5ae8348739fd03173056cb05f1e55d0f129881d"
	I1213 13:07:27.685081  407448 cri.go:89] found id: "2a7f427a075b6ebada9bc037f76c3a7326d7c26ef26054dd05f59dd7a696441e"
	I1213 13:07:27.685084  407448 cri.go:89] found id: "9b7e546540c7cea0b7f684aeaa74db9dca87eb76f77d77d8121f3927b1239ae2"
	I1213 13:07:27.685087  407448 cri.go:89] found id: "dba035f34dd51a8cd71b4f0ae554035ac03076228fce0be93b5b35ef0ca0e069"
	I1213 13:07:27.685089  407448 cri.go:89] found id: ""
	I1213 13:07:27.685129  407448 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:07:27.698420  407448 out.go:203] 
	W1213 13:07:27.699745  407448 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 13:07:27.699770  407448 out.go:285] * 
	* 
	W1213 13:07:27.704063  407448 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 13:07:27.705844  407448 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-802674 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.11s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-bldsd" [58d6cc59-4315-40ba-b95c-caaeeea9ef12] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003573661s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-802674 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-802674 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (249.718806ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:07:12.408850  406332 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:07:12.408940  406332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:12.408944  406332 out.go:374] Setting ErrFile to fd 2...
	I1213 13:07:12.408948  406332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:12.409133  406332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:07:12.409430  406332 mustload.go:66] Loading cluster: addons-802674
	I1213 13:07:12.409729  406332 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:12.409749  406332 addons.go:622] checking whether the cluster is paused
	I1213 13:07:12.409848  406332 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:12.409861  406332 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:07:12.410223  406332 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:07:12.428973  406332 ssh_runner.go:195] Run: systemctl --version
	I1213 13:07:12.429036  406332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:07:12.446605  406332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:07:12.544390  406332 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:07:12.544470  406332 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:07:12.573359  406332 cri.go:89] found id: "efa46cf269b564b4844602a1d159fe37ab66ca5f6b418f189ee827b9dba093c8"
	I1213 13:07:12.573382  406332 cri.go:89] found id: "ae277236625835803563b5d3709c95c1715a58bb7565d8ec6086941d0839195e"
	I1213 13:07:12.573386  406332 cri.go:89] found id: "25c2ccc8d56eb50465c9572ac4b69e4c56f4fc5934e450f09c18265ee0577194"
	I1213 13:07:12.573393  406332 cri.go:89] found id: "00b38c263e00072dd7d50a33875d10fc536592b2a1d6e234346711aac7cbbec0"
	I1213 13:07:12.573399  406332 cri.go:89] found id: "6df323a2878def1aae2a14ca9c2ad038546721c6ae36b6f316f313176188b46c"
	I1213 13:07:12.573405  406332 cri.go:89] found id: "c5db025aa30e9cd2c67c81ec6bc3c8ea9785b55f88c26b2645dcdbd948a7de0d"
	I1213 13:07:12.573411  406332 cri.go:89] found id: "263b6770119de12b6f6ae321a34d15fe0c91d69ef191dfcd91463e142f87e2d3"
	I1213 13:07:12.573416  406332 cri.go:89] found id: "f08ae0fc41016ef54669e28ca43b05abcc99b07e53b91caaf3b697ef447ee88d"
	I1213 13:07:12.573422  406332 cri.go:89] found id: "6d85d43816c0e9c27cdb9a0406519758a7c04501507c2a84cdacf18ed0bfe19f"
	I1213 13:07:12.573435  406332 cri.go:89] found id: "40aee451d49aa718e9b9b630dcc767fa2e58079b1b3f9728f0c44aa6c3b5c7e5"
	I1213 13:07:12.573447  406332 cri.go:89] found id: "7f147ccf5e501405b11f6c314e4bfd0d7c26b4a6bf64001ba70bbe56a38b0504"
	I1213 13:07:12.573453  406332 cri.go:89] found id: "a9051d728dbfaaa93fa17ffd17029974b838f52c264e001035d0dcb21ffd793a"
	I1213 13:07:12.573463  406332 cri.go:89] found id: "fc7d97af030f51f4603abd265b93269845365378e8e8c119222bafedc7cc4351"
	I1213 13:07:12.573470  406332 cri.go:89] found id: "f4ac5ed0bb71af6a3a22c2384168e3c4e9e23c1de940ae834d03068e9fea08ee"
	I1213 13:07:12.573479  406332 cri.go:89] found id: "bb2165f7660fc2ba491c4871263b79975a85cb6def2d2a4f73eca8a2dd7d8f07"
	I1213 13:07:12.573494  406332 cri.go:89] found id: "be21f9e65e565f792b744333cf27c95ecfa408d73cd2a551f0c5c7f265a293e3"
	I1213 13:07:12.573504  406332 cri.go:89] found id: "810cfaaa4b7814b312b2db787f8ec029d4e832cc7e12034fb85045552bd3f724"
	I1213 13:07:12.573512  406332 cri.go:89] found id: "5eca19a8b70c2a0e9d976b959fbf7d7aa4c7ee8009fb16d38e7b5f5c02b8cce6"
	I1213 13:07:12.573518  406332 cri.go:89] found id: "d50cb67d5dec7ec3f682549ab14b880502935a667c57f8d8cdb0c463515a22e6"
	I1213 13:07:12.573523  406332 cri.go:89] found id: "b6315f71701be89e474fba173cf05ee0075e34674512768e6df77a3cc4cd9523"
	I1213 13:07:12.573540  406332 cri.go:89] found id: "610b806094f3861cda2f55f3c5ae8348739fd03173056cb05f1e55d0f129881d"
	I1213 13:07:12.573549  406332 cri.go:89] found id: "2a7f427a075b6ebada9bc037f76c3a7326d7c26ef26054dd05f59dd7a696441e"
	I1213 13:07:12.573555  406332 cri.go:89] found id: "9b7e546540c7cea0b7f684aeaa74db9dca87eb76f77d77d8121f3927b1239ae2"
	I1213 13:07:12.573564  406332 cri.go:89] found id: "dba035f34dd51a8cd71b4f0ae554035ac03076228fce0be93b5b35ef0ca0e069"
	I1213 13:07:12.573569  406332 cri.go:89] found id: ""
	I1213 13:07:12.573631  406332 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:07:12.588115  406332 out.go:203] 
	W1213 13:07:12.589296  406332 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 13:07:12.589316  406332 out.go:285] * 
	* 
	W1213 13:07:12.593501  406332 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 13:07:12.594726  406332 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-802674 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.25s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-l5tbt" [1a623f84-d568-4fa6-b399-0f904abd58a9] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003619597s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-802674 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-802674 addons disable yakd --alsologtostderr -v=1: exit status 11 (258.969142ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:07:17.662549  406758 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:07:17.662666  406758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:17.662678  406758 out.go:374] Setting ErrFile to fd 2...
	I1213 13:07:17.662685  406758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:17.662999  406758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:07:17.663325  406758 mustload.go:66] Loading cluster: addons-802674
	I1213 13:07:17.663746  406758 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:17.663788  406758 addons.go:622] checking whether the cluster is paused
	I1213 13:07:17.663908  406758 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:17.663926  406758 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:07:17.664304  406758 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:07:17.683078  406758 ssh_runner.go:195] Run: systemctl --version
	I1213 13:07:17.683149  406758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:07:17.700748  406758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:07:17.799449  406758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:07:17.799529  406758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:07:17.831017  406758 cri.go:89] found id: "f7c4229a576fe07cf0919814dfee6c0705b49c93f6835f46215361a77c4c55ac"
	I1213 13:07:17.831040  406758 cri.go:89] found id: "efa46cf269b564b4844602a1d159fe37ab66ca5f6b418f189ee827b9dba093c8"
	I1213 13:07:17.831047  406758 cri.go:89] found id: "ae277236625835803563b5d3709c95c1715a58bb7565d8ec6086941d0839195e"
	I1213 13:07:17.831052  406758 cri.go:89] found id: "25c2ccc8d56eb50465c9572ac4b69e4c56f4fc5934e450f09c18265ee0577194"
	I1213 13:07:17.831057  406758 cri.go:89] found id: "00b38c263e00072dd7d50a33875d10fc536592b2a1d6e234346711aac7cbbec0"
	I1213 13:07:17.831063  406758 cri.go:89] found id: "6df323a2878def1aae2a14ca9c2ad038546721c6ae36b6f316f313176188b46c"
	I1213 13:07:17.831068  406758 cri.go:89] found id: "c5db025aa30e9cd2c67c81ec6bc3c8ea9785b55f88c26b2645dcdbd948a7de0d"
	I1213 13:07:17.831074  406758 cri.go:89] found id: "263b6770119de12b6f6ae321a34d15fe0c91d69ef191dfcd91463e142f87e2d3"
	I1213 13:07:17.831079  406758 cri.go:89] found id: "f08ae0fc41016ef54669e28ca43b05abcc99b07e53b91caaf3b697ef447ee88d"
	I1213 13:07:17.831088  406758 cri.go:89] found id: "6d85d43816c0e9c27cdb9a0406519758a7c04501507c2a84cdacf18ed0bfe19f"
	I1213 13:07:17.831094  406758 cri.go:89] found id: "40aee451d49aa718e9b9b630dcc767fa2e58079b1b3f9728f0c44aa6c3b5c7e5"
	I1213 13:07:17.831100  406758 cri.go:89] found id: "7f147ccf5e501405b11f6c314e4bfd0d7c26b4a6bf64001ba70bbe56a38b0504"
	I1213 13:07:17.831107  406758 cri.go:89] found id: "a9051d728dbfaaa93fa17ffd17029974b838f52c264e001035d0dcb21ffd793a"
	I1213 13:07:17.831113  406758 cri.go:89] found id: "fc7d97af030f51f4603abd265b93269845365378e8e8c119222bafedc7cc4351"
	I1213 13:07:17.831121  406758 cri.go:89] found id: "f4ac5ed0bb71af6a3a22c2384168e3c4e9e23c1de940ae834d03068e9fea08ee"
	I1213 13:07:17.831136  406758 cri.go:89] found id: "bb2165f7660fc2ba491c4871263b79975a85cb6def2d2a4f73eca8a2dd7d8f07"
	I1213 13:07:17.831146  406758 cri.go:89] found id: "be21f9e65e565f792b744333cf27c95ecfa408d73cd2a551f0c5c7f265a293e3"
	I1213 13:07:17.831153  406758 cri.go:89] found id: "810cfaaa4b7814b312b2db787f8ec029d4e832cc7e12034fb85045552bd3f724"
	I1213 13:07:17.831158  406758 cri.go:89] found id: "5eca19a8b70c2a0e9d976b959fbf7d7aa4c7ee8009fb16d38e7b5f5c02b8cce6"
	I1213 13:07:17.831164  406758 cri.go:89] found id: "d50cb67d5dec7ec3f682549ab14b880502935a667c57f8d8cdb0c463515a22e6"
	I1213 13:07:17.831174  406758 cri.go:89] found id: "b6315f71701be89e474fba173cf05ee0075e34674512768e6df77a3cc4cd9523"
	I1213 13:07:17.831184  406758 cri.go:89] found id: "610b806094f3861cda2f55f3c5ae8348739fd03173056cb05f1e55d0f129881d"
	I1213 13:07:17.831190  406758 cri.go:89] found id: "2a7f427a075b6ebada9bc037f76c3a7326d7c26ef26054dd05f59dd7a696441e"
	I1213 13:07:17.831196  406758 cri.go:89] found id: "9b7e546540c7cea0b7f684aeaa74db9dca87eb76f77d77d8121f3927b1239ae2"
	I1213 13:07:17.831203  406758 cri.go:89] found id: "dba035f34dd51a8cd71b4f0ae554035ac03076228fce0be93b5b35ef0ca0e069"
	I1213 13:07:17.831212  406758 cri.go:89] found id: ""
	I1213 13:07:17.831263  406758 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:07:17.847263  406758 out.go:203] 
	W1213 13:07:17.849570  406758 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 13:07:17.849681  406758 out.go:285] * 
	* 
	W1213 13:07:17.857171  406758 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 13:07:17.858409  406758 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-802674 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.26s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-jrjdp" [80dc3d87-78c3-4beb-8541-e2a6cf003f4e] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003611342s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-802674 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-802674 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (242.706463ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:07:17.416324  406699 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:07:17.416592  406699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:17.416610  406699 out.go:374] Setting ErrFile to fd 2...
	I1213 13:07:17.416617  406699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:07:17.416816  406699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:07:17.417049  406699 mustload.go:66] Loading cluster: addons-802674
	I1213 13:07:17.417359  406699 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:17.417379  406699 addons.go:622] checking whether the cluster is paused
	I1213 13:07:17.417456  406699 config.go:182] Loaded profile config "addons-802674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:07:17.417468  406699 host.go:66] Checking if "addons-802674" exists ...
	I1213 13:07:17.417860  406699 cli_runner.go:164] Run: docker container inspect addons-802674 --format={{.State.Status}}
	I1213 13:07:17.436221  406699 ssh_runner.go:195] Run: systemctl --version
	I1213 13:07:17.436271  406699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-802674
	I1213 13:07:17.453519  406699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/addons-802674/id_rsa Username:docker}
	I1213 13:07:17.549398  406699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:07:17.549483  406699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:07:17.578649  406699 cri.go:89] found id: "f7c4229a576fe07cf0919814dfee6c0705b49c93f6835f46215361a77c4c55ac"
	I1213 13:07:17.578678  406699 cri.go:89] found id: "efa46cf269b564b4844602a1d159fe37ab66ca5f6b418f189ee827b9dba093c8"
	I1213 13:07:17.578682  406699 cri.go:89] found id: "ae277236625835803563b5d3709c95c1715a58bb7565d8ec6086941d0839195e"
	I1213 13:07:17.578686  406699 cri.go:89] found id: "25c2ccc8d56eb50465c9572ac4b69e4c56f4fc5934e450f09c18265ee0577194"
	I1213 13:07:17.578689  406699 cri.go:89] found id: "00b38c263e00072dd7d50a33875d10fc536592b2a1d6e234346711aac7cbbec0"
	I1213 13:07:17.578695  406699 cri.go:89] found id: "6df323a2878def1aae2a14ca9c2ad038546721c6ae36b6f316f313176188b46c"
	I1213 13:07:17.578698  406699 cri.go:89] found id: "c5db025aa30e9cd2c67c81ec6bc3c8ea9785b55f88c26b2645dcdbd948a7de0d"
	I1213 13:07:17.578701  406699 cri.go:89] found id: "263b6770119de12b6f6ae321a34d15fe0c91d69ef191dfcd91463e142f87e2d3"
	I1213 13:07:17.578703  406699 cri.go:89] found id: "f08ae0fc41016ef54669e28ca43b05abcc99b07e53b91caaf3b697ef447ee88d"
	I1213 13:07:17.578715  406699 cri.go:89] found id: "6d85d43816c0e9c27cdb9a0406519758a7c04501507c2a84cdacf18ed0bfe19f"
	I1213 13:07:17.578720  406699 cri.go:89] found id: "40aee451d49aa718e9b9b630dcc767fa2e58079b1b3f9728f0c44aa6c3b5c7e5"
	I1213 13:07:17.578724  406699 cri.go:89] found id: "7f147ccf5e501405b11f6c314e4bfd0d7c26b4a6bf64001ba70bbe56a38b0504"
	I1213 13:07:17.578729  406699 cri.go:89] found id: "a9051d728dbfaaa93fa17ffd17029974b838f52c264e001035d0dcb21ffd793a"
	I1213 13:07:17.578734  406699 cri.go:89] found id: "fc7d97af030f51f4603abd265b93269845365378e8e8c119222bafedc7cc4351"
	I1213 13:07:17.578738  406699 cri.go:89] found id: "f4ac5ed0bb71af6a3a22c2384168e3c4e9e23c1de940ae834d03068e9fea08ee"
	I1213 13:07:17.578748  406699 cri.go:89] found id: "bb2165f7660fc2ba491c4871263b79975a85cb6def2d2a4f73eca8a2dd7d8f07"
	I1213 13:07:17.578754  406699 cri.go:89] found id: "be21f9e65e565f792b744333cf27c95ecfa408d73cd2a551f0c5c7f265a293e3"
	I1213 13:07:17.578758  406699 cri.go:89] found id: "810cfaaa4b7814b312b2db787f8ec029d4e832cc7e12034fb85045552bd3f724"
	I1213 13:07:17.578761  406699 cri.go:89] found id: "5eca19a8b70c2a0e9d976b959fbf7d7aa4c7ee8009fb16d38e7b5f5c02b8cce6"
	I1213 13:07:17.578764  406699 cri.go:89] found id: "d50cb67d5dec7ec3f682549ab14b880502935a667c57f8d8cdb0c463515a22e6"
	I1213 13:07:17.578767  406699 cri.go:89] found id: "b6315f71701be89e474fba173cf05ee0075e34674512768e6df77a3cc4cd9523"
	I1213 13:07:17.578769  406699 cri.go:89] found id: "610b806094f3861cda2f55f3c5ae8348739fd03173056cb05f1e55d0f129881d"
	I1213 13:07:17.578772  406699 cri.go:89] found id: "2a7f427a075b6ebada9bc037f76c3a7326d7c26ef26054dd05f59dd7a696441e"
	I1213 13:07:17.578788  406699 cri.go:89] found id: "9b7e546540c7cea0b7f684aeaa74db9dca87eb76f77d77d8121f3927b1239ae2"
	I1213 13:07:17.578793  406699 cri.go:89] found id: "dba035f34dd51a8cd71b4f0ae554035ac03076228fce0be93b5b35ef0ca0e069"
	I1213 13:07:17.578797  406699 cri.go:89] found id: ""
	I1213 13:07:17.578857  406699 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:07:17.593161  406699 out.go:203] 
	W1213 13:07:17.594550  406699 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:07:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 13:07:17.594575  406699 out.go:285] * 
	* 
	W1213 13:07:17.598759  406699 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 13:07:17.600060  406699 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-802674 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.25s)
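
Every addon-disable failure in this group exits with MK_ADDON_DISABLE_PAUSED for the same reason: the paused-state check runs `sudo runc list -f json` inside the node, and that command exits 1 because /run/runc does not exist. A minimal Go sketch for reproducing the check from the host, assuming the kic node container from the log above (addons-802674) is reachable with `docker exec`; this is an illustration, not part of the test suite:

	// repro_runc_check.go: re-runs the same command minikube's paused-check uses,
	// via docker exec into the kic node container named in the log above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("docker", "exec", "addons-802674", "sudo", "runc", "list", "-f", "json")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output: %s\n", out)
		if err != nil {
			// On the failing runs this surfaces the same "open /run/runc: no such
			// file or directory" stderr with exit status 1.
			fmt.Printf("command failed: %v\n", err)
		}
	}
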

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (2.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-018090 image ls --format short --alsologtostderr: (2.283757508s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-018090 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-018090 image ls --format short --alsologtostderr:
I1213 13:13:30.520112  432749 out.go:360] Setting OutFile to fd 1 ...
I1213 13:13:30.520414  432749 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:13:30.520428  432749 out.go:374] Setting ErrFile to fd 2...
I1213 13:13:30.520434  432749 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:13:30.520713  432749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
I1213 13:13:30.521562  432749 config.go:182] Loaded profile config "functional-018090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 13:13:30.521711  432749 config.go:182] Loaded profile config "functional-018090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 13:13:30.522400  432749 cli_runner.go:164] Run: docker container inspect functional-018090 --format={{.State.Status}}
I1213 13:13:30.548430  432749 ssh_runner.go:195] Run: systemctl --version
I1213 13:13:30.548521  432749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-018090
I1213 13:13:30.573201  432749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/functional-018090/id_rsa Username:docker}
I1213 13:13:30.680631  432749 ssh_runner.go:195] Run: sudo crictl images --output json
I1213 13:13:32.711493  432749 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.030821161s)
W1213 13:13:32.711592  432749 cache_images.go:736] Failed to list images for profile functional-018090 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1213 13:13:32.709053    7100 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="image:{}"
time="2025-12-13T13:13:32Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
functional_test.go:290: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.28s)
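
The short-format listing comes back empty because the underlying `sudo crictl images --output json` call times out. The check itself is just a scan of that JSON for a registry.k8s.io/pause tag; a rough Go sketch of it, assuming crictl's usual images JSON shape (an `images` array whose entries carry `repoTags`), which is an assumption here rather than something taken from the minikube source:

	// check_image.go: reads crictl's `images --output json` output from stdin and
	// looks for the pause image the test expects. Field names are assumed.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"strings"
	)

	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		var list imageList
		if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if strings.HasPrefix(tag, "registry.k8s.io/pause") {
					fmt.Println("found:", tag)
					return
				}
			}
		}
		fmt.Println("registry.k8s.io/pause not listed")
	}

On this run there is nothing to decode at all: crictl exits 1 after the DeadlineExceeded, so `image ls` prints an empty list and the expectation fails.
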

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (2.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 image ls --format table --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-018090 image ls --format table --alsologtostderr: (2.278363545s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-018090 image ls --format table --alsologtostderr:
┌───────┬─────┬──────────┬──────┐
│ IMAGE │ TAG │ IMAGE ID │ SIZE │
└───────┴─────┴──────────┴──────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-018090 image ls --format table --alsologtostderr:
I1213 13:13:33.172790  433545 out.go:360] Setting OutFile to fd 1 ...
I1213 13:13:33.173098  433545 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:13:33.173106  433545 out.go:374] Setting ErrFile to fd 2...
I1213 13:13:33.173112  433545 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:13:33.173384  433545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
I1213 13:13:33.174201  433545 config.go:182] Loaded profile config "functional-018090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 13:13:33.174356  433545 config.go:182] Loaded profile config "functional-018090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 13:13:33.175035  433545 cli_runner.go:164] Run: docker container inspect functional-018090 --format={{.State.Status}}
I1213 13:13:33.201030  433545 ssh_runner.go:195] Run: systemctl --version
I1213 13:13:33.201087  433545 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-018090
I1213 13:13:33.223541  433545 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/functional-018090/id_rsa Username:docker}
I1213 13:13:33.331565  433545 ssh_runner.go:195] Run: sudo crictl images --output json
I1213 13:13:35.365919  433545 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.034319525s)
W1213 13:13:35.366003  433545 cache_images.go:736] Failed to list images for profile functional-018090 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1213 13:13:35.362268    7345 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" filter="image:{}"
time="2025-12-13T13:13:35Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL"
functional_test.go:290: expected │ registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (2.28s)
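
The table variant fails the same way: the CRI ListImages call is cancelled after roughly two seconds, so the table renders with headers only. A small Go sketch of what such a caller-side deadline looks like, with the two-second value read off the timings above; it is not how minikube or crictl actually configure their timeouts:

	// deadline_sketch.go: runs the same crictl command under a 2s deadline to
	// illustrate the DeadlineExceeded behaviour seen in this log.
	package main

	import (
		"context"
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()

		cmd := exec.CommandContext(ctx, "sudo", "crictl", "images", "--output", "json")
		out, err := cmd.CombinedOutput()
		if errors.Is(ctx.Err(), context.DeadlineExceeded) {
			fmt.Println("gave up after 2s, like the DeadlineExceeded in the log")
			return
		}
		if err != nil {
			fmt.Printf("crictl failed: %v\n%s\n", err, out)
			return
		}
		fmt.Printf("%s\n", out)
	}
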

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (5.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-728225
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 image load --daemon kicbase/echo-server:functional-728225 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-728225 image load --daemon kicbase/echo-server:functional-728225 --alsologtostderr: (2.575654557s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-728225 image ls: (2.262661656s)
functional_test.go:461: expected "kicbase/echo-server:functional-728225" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (5.26s)
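
Here the load command itself completes, but the follow-up `image ls` (functional_test.go:466) cannot see the tag, consistent with the crictl listing failures above. A sketch of that verification step using the binary path and profile copied from the log; illustrative only, not test code:

	// verify_loaded.go: re-runs the same `image ls` the test uses and greps the
	// output for the freshly loaded tag.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const tag = "kicbase/echo-server:functional-728225"
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-728225", "image", "ls").CombinedOutput()
		if err != nil {
			fmt.Printf("image ls failed: %v\n%s\n", err, out)
			return
		}
		if strings.Contains(string(out), tag) {
			fmt.Println("image is loaded:", tag)
		} else {
			// This is the branch the failing run hits: with the crio-backed listing
			// misbehaving (see the ImageList failures above), the tag never shows up
			// even though the load command itself completed.
			fmt.Println("image not listed:", tag)
		}
	}
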

                                                
                                    
TestJSONOutput/pause/Command (1.53s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-664471 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-664471 --output=json --user=testUser: exit status 80 (1.53198374s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e075224e-d524-485e-92e1-2ec15c1488b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-664471 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"af7b30b9-963f-487f-b98f-f23c4d7e6f5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-13T13:25:12Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"f5a3209d-3760-48b3-a49e-36fc29206c21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-664471 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.53s)
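
With --output=json, minikube emits one CloudEvents-style JSON object per line, as in the stdout above. A small Go sketch for pulling the error events back out of such a stream; only the fields visible in this log (type, data.name, data.message, data.exitcode) are modeled, and the real schema may carry more:

	// decode_events.go: reads minikube --output=json lines from stdin and prints
	// the error events.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type event struct {
		Type string `json:"type"`
		Data struct {
			Name     string `json:"name"`
			Message  string `json:"message"`
			ExitCode string `json:"exitcode"`
		} `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error events can be long
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s (exit %s): %s\n", ev.Data.Name, ev.Data.ExitCode, ev.Data.Message)
			}
		}
	}

Fed the stdout above, this prints the GUEST_PAUSE error with exit code 80; the unpause failure below produces a GUEST_UNPAUSE event of the same shape.
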

                                                
                                    
TestJSONOutput/unpause/Command (1.74s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-664471 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-664471 --output=json --user=testUser: exit status 80 (1.744410467s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"edf30923-6213-49e6-9148-da0a425e6de7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-664471 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"eec2e2ae-1ed8-45c8-84e1-c6ba158254e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-13T13:25:13Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"824872d4-aec9-4594-8919-303722aab920","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-664471 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.74s)

                                                
                                    
TestPause/serial/Pause (6.59s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-484783 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-484783 --alsologtostderr -v=5: exit status 80 (2.738520897s)

                                                
                                                
-- stdout --
	* Pausing node pause-484783 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:38:16.249913  596898 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:38:16.250026  596898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:38:16.250035  596898 out.go:374] Setting ErrFile to fd 2...
	I1213 13:38:16.250039  596898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:38:16.250220  596898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:38:16.250440  596898 out.go:368] Setting JSON to false
	I1213 13:38:16.250460  596898 mustload.go:66] Loading cluster: pause-484783
	I1213 13:38:16.250942  596898 config.go:182] Loaded profile config "pause-484783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:38:16.251319  596898 cli_runner.go:164] Run: docker container inspect pause-484783 --format={{.State.Status}}
	I1213 13:38:16.270362  596898 host.go:66] Checking if "pause-484783" exists ...
	I1213 13:38:16.270615  596898 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:38:16.326347  596898 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:81 SystemTime:2025-12-13 13:38:16.316610314 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:38:16.327013  596898 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765613186-22122/minikube-v1.37.0-1765613186-22122-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765613186-22122-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-484783 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1213 13:38:16.495750  596898 out.go:179] * Pausing node pause-484783 ... 
	I1213 13:38:16.545237  596898 host.go:66] Checking if "pause-484783" exists ...
	I1213 13:38:16.545594  596898 ssh_runner.go:195] Run: systemctl --version
	I1213 13:38:16.545644  596898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484783
	I1213 13:38:16.563925  596898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33356 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/pause-484783/id_rsa Username:docker}
	I1213 13:38:16.660995  596898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:38:16.674353  596898 pause.go:52] kubelet running: true
	I1213 13:38:16.674420  596898 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 13:38:16.808724  596898 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 13:38:16.808828  596898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 13:38:16.880152  596898 cri.go:89] found id: "b7a079362a05a33a6c08a026193aa38711129117e8344e3afb4881a090d24a14"
	I1213 13:38:16.880183  596898 cri.go:89] found id: "b12706571c86e31586b14584ea7da146350691dcdca5e35260ce4e8e2451dd17"
	I1213 13:38:16.880190  596898 cri.go:89] found id: "a86b7f66e2ebc03c4274e16df61bd2f021f1bd1120855d661801ec9c61029a88"
	I1213 13:38:16.880195  596898 cri.go:89] found id: "ac41f2f2f9b1bf7cd0fe9a4142241d9e5845d510dd7698a4d9ef37991b4c7c01"
	I1213 13:38:16.880200  596898 cri.go:89] found id: "9c7bf796178c3a16afac713c7182399638dd0c8cf1ff2a54bcf6a4c0c606997e"
	I1213 13:38:16.880205  596898 cri.go:89] found id: "d3b97f25c25e6b8f8c97f9a4d2b4d8d07f26642a7dceb69f6e3a270f5f27f195"
	I1213 13:38:16.880210  596898 cri.go:89] found id: "15061cf1e02860e17c437138c82b5df9e17b52159da0cdab64b358cbd74510ac"
	I1213 13:38:16.880214  596898 cri.go:89] found id: "199ba1838f628b89e26d7b2703ef401cffa89869e6755f9fc80b4d636b3fdc88"
	I1213 13:38:16.880218  596898 cri.go:89] found id: ""
	I1213 13:38:16.880262  596898 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:38:16.892548  596898 retry.go:31] will retry after 195.643099ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:38:16Z" level=error msg="open /run/runc: no such file or directory"
	I1213 13:38:17.089036  596898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:38:17.104147  596898 pause.go:52] kubelet running: false
	I1213 13:38:17.104208  596898 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 13:38:17.249105  596898 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 13:38:17.249191  596898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 13:38:17.331003  596898 cri.go:89] found id: "b7a079362a05a33a6c08a026193aa38711129117e8344e3afb4881a090d24a14"
	I1213 13:38:17.331026  596898 cri.go:89] found id: "b12706571c86e31586b14584ea7da146350691dcdca5e35260ce4e8e2451dd17"
	I1213 13:38:17.331032  596898 cri.go:89] found id: "a86b7f66e2ebc03c4274e16df61bd2f021f1bd1120855d661801ec9c61029a88"
	I1213 13:38:17.331037  596898 cri.go:89] found id: "ac41f2f2f9b1bf7cd0fe9a4142241d9e5845d510dd7698a4d9ef37991b4c7c01"
	I1213 13:38:17.331042  596898 cri.go:89] found id: "9c7bf796178c3a16afac713c7182399638dd0c8cf1ff2a54bcf6a4c0c606997e"
	I1213 13:38:17.331055  596898 cri.go:89] found id: "d3b97f25c25e6b8f8c97f9a4d2b4d8d07f26642a7dceb69f6e3a270f5f27f195"
	I1213 13:38:17.331060  596898 cri.go:89] found id: "15061cf1e02860e17c437138c82b5df9e17b52159da0cdab64b358cbd74510ac"
	I1213 13:38:17.331064  596898 cri.go:89] found id: "199ba1838f628b89e26d7b2703ef401cffa89869e6755f9fc80b4d636b3fdc88"
	I1213 13:38:17.331069  596898 cri.go:89] found id: ""
	I1213 13:38:17.331118  596898 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:38:17.344173  596898 retry.go:31] will retry after 271.144464ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:38:17Z" level=error msg="open /run/runc: no such file or directory"
	I1213 13:38:17.615652  596898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:38:17.631553  596898 pause.go:52] kubelet running: false
	I1213 13:38:17.631634  596898 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 13:38:17.781347  596898 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 13:38:17.781442  596898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 13:38:17.874017  596898 cri.go:89] found id: "b7a079362a05a33a6c08a026193aa38711129117e8344e3afb4881a090d24a14"
	I1213 13:38:17.874043  596898 cri.go:89] found id: "b12706571c86e31586b14584ea7da146350691dcdca5e35260ce4e8e2451dd17"
	I1213 13:38:17.874050  596898 cri.go:89] found id: "a86b7f66e2ebc03c4274e16df61bd2f021f1bd1120855d661801ec9c61029a88"
	I1213 13:38:17.874055  596898 cri.go:89] found id: "ac41f2f2f9b1bf7cd0fe9a4142241d9e5845d510dd7698a4d9ef37991b4c7c01"
	I1213 13:38:17.874059  596898 cri.go:89] found id: "9c7bf796178c3a16afac713c7182399638dd0c8cf1ff2a54bcf6a4c0c606997e"
	I1213 13:38:17.874064  596898 cri.go:89] found id: "d3b97f25c25e6b8f8c97f9a4d2b4d8d07f26642a7dceb69f6e3a270f5f27f195"
	I1213 13:38:17.874069  596898 cri.go:89] found id: "15061cf1e02860e17c437138c82b5df9e17b52159da0cdab64b358cbd74510ac"
	I1213 13:38:17.874074  596898 cri.go:89] found id: "199ba1838f628b89e26d7b2703ef401cffa89869e6755f9fc80b4d636b3fdc88"
	I1213 13:38:17.874078  596898 cri.go:89] found id: ""
	I1213 13:38:17.874125  596898 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:38:17.890514  596898 retry.go:31] will retry after 733.065263ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:38:17Z" level=error msg="open /run/runc: no such file or directory"
	I1213 13:38:18.623998  596898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:38:18.641182  596898 pause.go:52] kubelet running: false
	I1213 13:38:18.641252  596898 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 13:38:18.817413  596898 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 13:38:18.817502  596898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 13:38:18.899340  596898 cri.go:89] found id: "b7a079362a05a33a6c08a026193aa38711129117e8344e3afb4881a090d24a14"
	I1213 13:38:18.899368  596898 cri.go:89] found id: "b12706571c86e31586b14584ea7da146350691dcdca5e35260ce4e8e2451dd17"
	I1213 13:38:18.899375  596898 cri.go:89] found id: "a86b7f66e2ebc03c4274e16df61bd2f021f1bd1120855d661801ec9c61029a88"
	I1213 13:38:18.899381  596898 cri.go:89] found id: "ac41f2f2f9b1bf7cd0fe9a4142241d9e5845d510dd7698a4d9ef37991b4c7c01"
	I1213 13:38:18.899385  596898 cri.go:89] found id: "9c7bf796178c3a16afac713c7182399638dd0c8cf1ff2a54bcf6a4c0c606997e"
	I1213 13:38:18.899391  596898 cri.go:89] found id: "d3b97f25c25e6b8f8c97f9a4d2b4d8d07f26642a7dceb69f6e3a270f5f27f195"
	I1213 13:38:18.899395  596898 cri.go:89] found id: "15061cf1e02860e17c437138c82b5df9e17b52159da0cdab64b358cbd74510ac"
	I1213 13:38:18.899399  596898 cri.go:89] found id: "199ba1838f628b89e26d7b2703ef401cffa89869e6755f9fc80b4d636b3fdc88"
	I1213 13:38:18.899410  596898 cri.go:89] found id: ""
	I1213 13:38:18.899483  596898 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:38:18.915504  596898 out.go:203] 
	W1213 13:38:18.916588  596898 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:38:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:38:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 13:38:18.916614  596898 out.go:285] * 
	* 
	W1213 13:38:18.923220  596898 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 13:38:18.924373  596898 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-484783 --alsologtostderr -v=5" : exit status 80
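
The stderr above shows the pause path retrying the `runc list` probe three times (after roughly 196ms, 271ms and 733ms) before surfacing GUEST_PAUSE. A Go sketch that mirrors that visible retry pattern, purely as an illustration of the behaviour in the log rather than minikube's actual retry implementation:

	// retry_sketch.go: retries the runc probe with the backoffs seen in the log,
	// then gives up the way the pause command does.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		backoffs := []time.Duration{196 * time.Millisecond, 271 * time.Millisecond, 733 * time.Millisecond}
		for attempt := 0; ; attempt++ {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				fmt.Printf("%s\n", out)
				return
			}
			if attempt >= len(backoffs) {
				// After the retries are exhausted, the pause command reports this
				// as "Exiting due to GUEST_PAUSE", as seen above.
				fmt.Printf("giving up: %v\n%s\n", err, out)
				return
			}
			fmt.Printf("will retry after %v: %v\n", backoffs[attempt], err)
			time.Sleep(backoffs[attempt])
		}
	}
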
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-484783
helpers_test.go:244: (dbg) docker inspect pause-484783:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0642b40c1d8b1a2a6627eb9e57674aa6e6928ebaed3ef0966cf52ae4443cd7d9",
	        "Created": "2025-12-13T13:36:59.032097531Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 580902,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:36:59.086678769Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/0642b40c1d8b1a2a6627eb9e57674aa6e6928ebaed3ef0966cf52ae4443cd7d9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0642b40c1d8b1a2a6627eb9e57674aa6e6928ebaed3ef0966cf52ae4443cd7d9/hostname",
	        "HostsPath": "/var/lib/docker/containers/0642b40c1d8b1a2a6627eb9e57674aa6e6928ebaed3ef0966cf52ae4443cd7d9/hosts",
	        "LogPath": "/var/lib/docker/containers/0642b40c1d8b1a2a6627eb9e57674aa6e6928ebaed3ef0966cf52ae4443cd7d9/0642b40c1d8b1a2a6627eb9e57674aa6e6928ebaed3ef0966cf52ae4443cd7d9-json.log",
	        "Name": "/pause-484783",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-484783:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-484783",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0642b40c1d8b1a2a6627eb9e57674aa6e6928ebaed3ef0966cf52ae4443cd7d9",
	                "LowerDir": "/var/lib/docker/overlay2/57a4d12983411d33d877cc5ffcd68c2da87be1f108a31936a0a5c7efa16199ad-init/diff:/var/lib/docker/overlay2/2ab30f867418f233812f5ff754587aaeab7569a5579dc6a5c99873a35cf81eb6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/57a4d12983411d33d877cc5ffcd68c2da87be1f108a31936a0a5c7efa16199ad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/57a4d12983411d33d877cc5ffcd68c2da87be1f108a31936a0a5c7efa16199ad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/57a4d12983411d33d877cc5ffcd68c2da87be1f108a31936a0a5c7efa16199ad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-484783",
	                "Source": "/var/lib/docker/volumes/pause-484783/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-484783",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-484783",
	                "name.minikube.sigs.k8s.io": "pause-484783",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "94b49d9ab8531277e0a2021e8bd4b09d1ac367d64ff929bcf8a10ba58ee5050f",
	            "SandboxKey": "/var/run/docker/netns/94b49d9ab853",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33356"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33357"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33360"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33358"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33359"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-484783": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "aa75236e9aa704210586687d6c0989e3467da5e3e0be0c2abcfcc1acaecc8c9b",
	                    "EndpointID": "09b410d4a8b13e2e415fd009c05f0223bbfae9a2e7c9b86c70c075866c338f00",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "e6:5f:43:8d:f9:8e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-484783",
	                        "0642b40c1d8b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
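
The SSH port used throughout this run (33356) comes from the NetworkSettings.Ports block above; earlier in the log the harness resolves it with a docker inspect format template. A sketch of the same lookup against the container inspected here, using only the standard docker CLI:

	// ssh_port.go: extracts the published host port for 22/tcp from the node
	// container, with the same format template that appears in this log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, "pause-484783").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// For the container inspected above this prints 33356, matching the
		// NetworkSettings.Ports section of the JSON.
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}
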
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-484783 -n pause-484783
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-484783 -n pause-484783: exit status 2 (362.212318ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-484783 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-484783 logs -n 25: (1.086899136s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-345214 --schedule 5m -v=5 --alsologtostderr                                                               │ scheduled-stop-345214       │ jenkins │ v1.37.0 │ 13 Dec 25 13:35 UTC │                     │
	│ stop    │ -p scheduled-stop-345214 --schedule 5m -v=5 --alsologtostderr                                                               │ scheduled-stop-345214       │ jenkins │ v1.37.0 │ 13 Dec 25 13:35 UTC │                     │
	│ stop    │ -p scheduled-stop-345214 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-345214       │ jenkins │ v1.37.0 │ 13 Dec 25 13:35 UTC │                     │
	│ stop    │ -p scheduled-stop-345214 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-345214       │ jenkins │ v1.37.0 │ 13 Dec 25 13:35 UTC │                     │
	│ stop    │ -p scheduled-stop-345214 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-345214       │ jenkins │ v1.37.0 │ 13 Dec 25 13:35 UTC │                     │
	│ stop    │ -p scheduled-stop-345214 --cancel-scheduled                                                                                 │ scheduled-stop-345214       │ jenkins │ v1.37.0 │ 13 Dec 25 13:35 UTC │ 13 Dec 25 13:35 UTC │
	│ stop    │ -p scheduled-stop-345214 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-345214       │ jenkins │ v1.37.0 │ 13 Dec 25 13:35 UTC │                     │
	│ stop    │ -p scheduled-stop-345214 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-345214       │ jenkins │ v1.37.0 │ 13 Dec 25 13:35 UTC │                     │
	│ stop    │ -p scheduled-stop-345214 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-345214       │ jenkins │ v1.37.0 │ 13 Dec 25 13:35 UTC │ 13 Dec 25 13:36 UTC │
	│ delete  │ -p scheduled-stop-345214                                                                                                    │ scheduled-stop-345214       │ jenkins │ v1.37.0 │ 13 Dec 25 13:36 UTC │ 13 Dec 25 13:36 UTC │
	│ start   │ -p insufficient-storage-215505 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio            │ insufficient-storage-215505 │ jenkins │ v1.37.0 │ 13 Dec 25 13:36 UTC │                     │
	│ delete  │ -p insufficient-storage-215505                                                                                              │ insufficient-storage-215505 │ jenkins │ v1.37.0 │ 13 Dec 25 13:36 UTC │ 13 Dec 25 13:36 UTC │
	│ start   │ -p offline-crio-444562 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio           │ offline-crio-444562         │ jenkins │ v1.37.0 │ 13 Dec 25 13:36 UTC │ 13 Dec 25 13:38 UTC │
	│ start   │ -p cert-expiration-541985 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-541985      │ jenkins │ v1.37.0 │ 13 Dec 25 13:36 UTC │ 13 Dec 25 13:37 UTC │
	│ start   │ -p pause-484783 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                   │ pause-484783                │ jenkins │ v1.37.0 │ 13 Dec 25 13:36 UTC │ 13 Dec 25 13:38 UTC │
	│ start   │ -p force-systemd-env-488734 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ force-systemd-env-488734    │ jenkins │ v1.37.0 │ 13 Dec 25 13:36 UTC │ 13 Dec 25 13:37 UTC │
	│ delete  │ -p force-systemd-env-488734                                                                                                 │ force-systemd-env-488734    │ jenkins │ v1.37.0 │ 13 Dec 25 13:37 UTC │ 13 Dec 25 13:37 UTC │
	│ start   │ -p force-systemd-flag-212830 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-212830   │ jenkins │ v1.37.0 │ 13 Dec 25 13:37 UTC │ 13 Dec 25 13:37 UTC │
	│ ssh     │ force-systemd-flag-212830 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                        │ force-systemd-flag-212830   │ jenkins │ v1.37.0 │ 13 Dec 25 13:37 UTC │ 13 Dec 25 13:37 UTC │
	│ delete  │ -p force-systemd-flag-212830                                                                                                │ force-systemd-flag-212830   │ jenkins │ v1.37.0 │ 13 Dec 25 13:37 UTC │ 13 Dec 25 13:37 UTC │
	│ start   │ -p stopped-upgrade-627277 --memory=3072 --vm-driver=docker  --container-runtime=crio                                        │ stopped-upgrade-627277      │ jenkins │ v1.35.0 │ 13 Dec 25 13:37 UTC │                     │
	│ delete  │ -p offline-crio-444562                                                                                                      │ offline-crio-444562         │ jenkins │ v1.37.0 │ 13 Dec 25 13:38 UTC │ 13 Dec 25 13:38 UTC │
	│ start   │ -p pause-484783 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-484783                │ jenkins │ v1.37.0 │ 13 Dec 25 13:38 UTC │ 13 Dec 25 13:38 UTC │
	│ start   │ -p missing-upgrade-533439 --memory=3072 --driver=docker  --container-runtime=crio                                           │ missing-upgrade-533439      │ jenkins │ v1.35.0 │ 13 Dec 25 13:38 UTC │                     │
	│ pause   │ -p pause-484783 --alsologtostderr -v=5                                                                                      │ pause-484783                │ jenkins │ v1.37.0 │ 13 Dec 25 13:38 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:38:11
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:38:11.654356  595557 out.go:345] Setting OutFile to fd 1 ...
	I1213 13:38:11.654440  595557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 13:38:11.654443  595557 out.go:358] Setting ErrFile to fd 2...
	I1213 13:38:11.654447  595557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 13:38:11.654657  595557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:38:11.655151  595557 out.go:352] Setting JSON to false
	I1213 13:38:11.656579  595557 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8440,"bootTime":1765624652,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:38:11.656706  595557 start.go:139] virtualization: kvm guest
	I1213 13:38:11.658842  595557 out.go:177] * [missing-upgrade-533439] minikube v1.35.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:38:11.660280  595557 notify.go:220] Checking for updates...
	I1213 13:38:11.660297  595557 out.go:177]   - MINIKUBE_LOCATION=22122
	I1213 13:38:11.661447  595557 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:38:11.663487  595557 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:38:11.664651  595557 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:38:11.665747  595557 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:38:11.667669  595557 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:38:11.669108  595557 config.go:182] Loaded profile config "cert-expiration-541985": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:38:11.669234  595557 config.go:182] Loaded profile config "pause-484783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:38:11.669329  595557 config.go:182] Loaded profile config "stopped-upgrade-627277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 13:38:11.669419  595557 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 13:38:11.696444  595557 docker.go:123] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:38:11.696534  595557 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:38:11.775244  595557 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-13 13:38:11.763905703 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
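
[Editor's aside] The struct dump above is the output of the `docker system info --format "{{json .}}"` call on the preceding line; note the CgroupDriver:systemd field, which matches the cgroup driver CRI-O is configured with later in this log. A hedged sketch of reading a few of those fields directly, assuming the docker CLI is on PATH and that the usual JSON field names from docker's /info output (ServerVersion, CgroupDriver, etc.) are present:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the handful of fields of interest here; the JSON carries many more.
type dockerInfo struct {
	ServerVersion   string
	OperatingSystem string
	NCPU            int
	MemTotal        int64
	CgroupDriver    string
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker system info failed:", err)
		return
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	fmt.Printf("docker %s on %s, %d CPUs, cgroup driver %s\n",
		info.ServerVersion, info.OperatingSystem, info.NCPU, info.CgroupDriver)
}
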
	I1213 13:38:11.775400  595557 docker.go:318] overlay module found
	I1213 13:38:11.780993  595557 out.go:177] * Using the docker driver based on user configuration
	I1213 13:38:11.782112  595557 start.go:297] selected driver: docker
	I1213 13:38:11.782120  595557 start.go:901] validating driver "docker" against <nil>
	I1213 13:38:11.782130  595557 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:38:11.782869  595557 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:38:11.852500  595557 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-13 13:38:11.839894256 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:38:11.852745  595557 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1213 13:38:11.853083  595557 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 13:38:11.854753  595557 out.go:177] * Using Docker driver with root privileges
	I1213 13:38:11.855821  595557 cni.go:84] Creating CNI manager for ""
	I1213 13:38:11.855896  595557 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:38:11.855906  595557 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 13:38:11.855970  595557 start.go:340] cluster config:
	{Name:missing-upgrade-533439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-533439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:38:11.858848  595557 out.go:177] * Starting "missing-upgrade-533439" primary control-plane node in "missing-upgrade-533439" cluster
	I1213 13:38:11.892200  595557 cache.go:121] Beginning downloading kic base image for docker with crio
	I1213 13:38:11.913601  595557 out.go:177] * Pulling base image v0.0.46 ...
	I1213 13:38:11.999006  595557 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I1213 13:38:11.999024  595557 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1213 13:38:11.999089  595557 preload.go:146] Found local preload: /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1213 13:38:11.999097  595557 cache.go:56] Caching tarball of preloaded images
	I1213 13:38:11.999204  595557 preload.go:172] Found /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 13:38:11.999209  595557 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1213 13:38:11.999306  595557 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/missing-upgrade-533439/config.json ...
	I1213 13:38:11.999321  595557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/missing-upgrade-533439/config.json: {Name:mk3aa711e2dc47d91f713899ca00e0889aacdb3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:38:12.021113  595557 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I1213 13:38:12.021134  595557 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I1213 13:38:12.021156  595557 cache.go:227] Successfully downloaded all kic artifacts
	I1213 13:38:12.021188  595557 start.go:360] acquireMachinesLock for missing-upgrade-533439: {Name:mk421c651e43a49875cb2c7dbe4365c6871bf96b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:38:12.021297  595557 start.go:364] duration metric: took 91.843µs to acquireMachinesLock for "missing-upgrade-533439"
	I1213 13:38:12.021321  595557 start.go:93] Provisioning new machine with config: &{Name:missing-upgrade-533439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-533439 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:38:12.021493  595557 start.go:125] createHost starting for "" (driver="docker")
	I1213 13:38:09.181918  593655 out.go:252] * Updating the running docker "pause-484783" container ...
	I1213 13:38:09.181963  593655 machine.go:94] provisionDockerMachine start ...
	I1213 13:38:09.182041  593655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484783
	I1213 13:38:09.204672  593655 main.go:143] libmachine: Using SSH client type: native
	I1213 13:38:09.204998  593655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33356 <nil> <nil>}
	I1213 13:38:09.205022  593655 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:38:09.344424  593655 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-484783
	
	I1213 13:38:09.344452  593655 ubuntu.go:182] provisioning hostname "pause-484783"
	I1213 13:38:09.344517  593655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484783
	I1213 13:38:09.369850  593655 main.go:143] libmachine: Using SSH client type: native
	I1213 13:38:09.370318  593655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33356 <nil> <nil>}
	I1213 13:38:09.370341  593655 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-484783 && echo "pause-484783" | sudo tee /etc/hostname
	I1213 13:38:09.523913  593655 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-484783
	
	I1213 13:38:09.524001  593655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484783
	I1213 13:38:09.542589  593655 main.go:143] libmachine: Using SSH client type: native
	I1213 13:38:09.542919  593655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33356 <nil> <nil>}
	I1213 13:38:09.542952  593655 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-484783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-484783/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-484783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:38:09.680974  593655 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 13:38:09.681009  593655 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-390571/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-390571/.minikube}
	I1213 13:38:09.681054  593655 ubuntu.go:190] setting up certificates
	I1213 13:38:09.681066  593655 provision.go:84] configureAuth start
	I1213 13:38:09.681137  593655 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-484783
	I1213 13:38:09.700218  593655 provision.go:143] copyHostCerts
	I1213 13:38:09.700295  593655 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem, removing ...
	I1213 13:38:09.700318  593655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem
	I1213 13:38:09.700408  593655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem (1078 bytes)
	I1213 13:38:09.700614  593655 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem, removing ...
	I1213 13:38:09.700630  593655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem
	I1213 13:38:09.700684  593655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem (1123 bytes)
	I1213 13:38:09.700794  593655 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem, removing ...
	I1213 13:38:09.700808  593655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem
	I1213 13:38:09.700854  593655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem (1679 bytes)
	I1213 13:38:09.700916  593655 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem org=jenkins.pause-484783 san=[127.0.0.1 192.168.103.2 localhost minikube pause-484783]
	I1213 13:38:09.807321  593655 provision.go:177] copyRemoteCerts
	I1213 13:38:09.807383  593655 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:38:09.807437  593655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484783
	I1213 13:38:09.825349  593655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33356 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/pause-484783/id_rsa Username:docker}
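
[Editor's aside] Each of the docker container inspect calls with the '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' template resolves the host port that the container's SSH port 22 is published on (33356 for pause-484783 above), which is then dialed from 127.0.0.1. A minimal sketch of the same lookup, assuming the docker CLI is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the host port mapped to the container's 22/tcp,
// using the same Go template the log above passes to docker inspect.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("pause-484783")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("SSH is published on 127.0.0.1:" + port)
}
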
	I1213 13:38:09.924112  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 13:38:09.944559  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:38:09.965328  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 13:38:09.982546  593655 provision.go:87] duration metric: took 301.455403ms to configureAuth
	I1213 13:38:09.982573  593655 ubuntu.go:206] setting minikube options for container-runtime
	I1213 13:38:09.982763  593655 config.go:182] Loaded profile config "pause-484783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:38:09.982899  593655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484783
	I1213 13:38:10.002279  593655 main.go:143] libmachine: Using SSH client type: native
	I1213 13:38:10.002492  593655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33356 <nil> <nil>}
	I1213 13:38:10.002509  593655 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:38:10.363493  593655 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:38:10.363523  593655 machine.go:97] duration metric: took 1.181550965s to provisionDockerMachine
	I1213 13:38:10.363536  593655 start.go:293] postStartSetup for "pause-484783" (driver="docker")
	I1213 13:38:10.363549  593655 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:38:10.363614  593655 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:38:10.363663  593655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484783
	I1213 13:38:10.384481  593655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33356 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/pause-484783/id_rsa Username:docker}
	I1213 13:38:10.485108  593655 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:38:10.488757  593655 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 13:38:10.488809  593655 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 13:38:10.488823  593655 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/addons for local assets ...
	I1213 13:38:10.488897  593655 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/files for local assets ...
	I1213 13:38:10.489006  593655 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem -> 3941302.pem in /etc/ssl/certs
	I1213 13:38:10.489132  593655 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 13:38:10.497109  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:38:10.514822  593655 start.go:296] duration metric: took 151.269689ms for postStartSetup
	I1213 13:38:10.514901  593655 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:38:10.514962  593655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484783
	I1213 13:38:10.535952  593655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33356 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/pause-484783/id_rsa Username:docker}
	I1213 13:38:10.632717  593655 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 13:38:10.637958  593655 fix.go:56] duration metric: took 1.477019951s for fixHost
	I1213 13:38:10.637983  593655 start.go:83] releasing machines lock for "pause-484783", held for 1.477066377s
	I1213 13:38:10.638063  593655 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-484783
	I1213 13:38:10.657527  593655 ssh_runner.go:195] Run: cat /version.json
	I1213 13:38:10.657593  593655 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:38:10.657675  593655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484783
	I1213 13:38:10.657595  593655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484783
	I1213 13:38:10.676461  593655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33356 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/pause-484783/id_rsa Username:docker}
	I1213 13:38:10.677531  593655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33356 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/pause-484783/id_rsa Username:docker}
	I1213 13:38:10.837761  593655 ssh_runner.go:195] Run: systemctl --version
	I1213 13:38:10.845399  593655 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:38:10.887003  593655 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 13:38:10.892313  593655 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:38:10.892374  593655 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:38:10.900652  593655 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 13:38:10.900671  593655 start.go:496] detecting cgroup driver to use...
	I1213 13:38:10.900702  593655 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 13:38:10.900747  593655 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:38:10.917085  593655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:38:10.929109  593655 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:38:10.929166  593655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:38:10.944563  593655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:38:10.957088  593655 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:38:11.083351  593655 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:38:11.231951  593655 docker.go:234] disabling docker service ...
	I1213 13:38:11.232016  593655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:38:11.254423  593655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:38:11.269173  593655 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:38:11.429628  593655 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:38:11.559359  593655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:38:11.575193  593655 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:38:11.593474  593655 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 13:38:11.593546  593655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:11.605405  593655 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 13:38:11.605468  593655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:11.615495  593655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:11.625506  593655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:11.635853  593655 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:38:11.645891  593655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:11.656536  593655 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:11.665674  593655 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:11.675304  593655 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:38:11.685103  593655 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:38:11.694657  593655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:38:11.842605  593655 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 13:38:12.323586  593655 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:38:12.323679  593655 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:38:12.328068  593655 start.go:564] Will wait 60s for crictl version
	I1213 13:38:12.328136  593655 ssh_runner.go:195] Run: which crictl
	I1213 13:38:12.331951  593655 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 13:38:12.357193  593655 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 13:38:12.357275  593655 ssh_runner.go:195] Run: crio --version
	I1213 13:38:12.393647  593655 ssh_runner.go:195] Run: crio --version
	I1213 13:38:12.432169  593655 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
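
[Editor's aside] The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with a series of sed one-liners (pause_image, cgroup_manager = "systemd", conmon_cgroup = "pod", the ip_unprivileged_port_start sysctl), then reloads systemd and restarts CRI-O before verifying the version over crio.sock. Purely as an illustration of what those sed expressions do, and not minikube's actual code, a Go sketch applying the same kind of whole-line replacement to a config file:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setLine replaces every line matching pattern with repl, mirroring the
// `sed -i 's|^.*key = .*$|key = "value"|'` edits shown in the log above.
func setLine(path, pattern, repl string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile("(?m)" + pattern)
	return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0o644)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
	edits := []struct{ pattern, repl string }{
		{`^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10.1"`},
		{`^.*cgroup_manager = .*$`, `cgroup_manager = "systemd"`},
	}
	for _, e := range edits {
		if err := setLine(conf, e.pattern, e.repl); err != nil {
			fmt.Println("edit failed:", err)
			return
		}
	}
	fmt.Println("updated", conf, "- a crio restart is still required")
}
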
	I1213 13:38:10.472074  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:38:10.504122  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1213 13:38:10.529366  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 13:38:10.555606  592646 provision.go:87] duration metric: took 348.215836ms to configureAuth
	I1213 13:38:10.555632  592646 ubuntu.go:193] setting minikube options for container-runtime
	I1213 13:38:10.555879  592646 config.go:182] Loaded profile config "stopped-upgrade-627277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 13:38:10.556045  592646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-627277
	I1213 13:38:10.574975  592646 main.go:141] libmachine: Using SSH client type: native
	I1213 13:38:10.575189  592646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33381 <nil> <nil>}
	I1213 13:38:10.575204  592646 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:38:10.835543  592646 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:38:10.835565  592646 machine.go:96] duration metric: took 1.092736866s to provisionDockerMachine
	I1213 13:38:10.835580  592646 client.go:171] duration metric: took 8.717598011s to LocalClient.Create
	I1213 13:38:10.835611  592646 start.go:167] duration metric: took 8.717667213s to libmachine.API.Create "stopped-upgrade-627277"
	I1213 13:38:10.835645  592646 start.go:293] postStartSetup for "stopped-upgrade-627277" (driver="docker")
	I1213 13:38:10.835659  592646 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:38:10.835736  592646 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:38:10.835831  592646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-627277
	I1213 13:38:10.857358  592646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33381 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/stopped-upgrade-627277/id_rsa Username:docker}
	I1213 13:38:10.952999  592646 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:38:10.956394  592646 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 13:38:10.956416  592646 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1213 13:38:10.956429  592646 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1213 13:38:10.956435  592646 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1213 13:38:10.956446  592646 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/addons for local assets ...
	I1213 13:38:10.956498  592646 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/files for local assets ...
	I1213 13:38:10.956564  592646 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem -> 3941302.pem in /etc/ssl/certs
	I1213 13:38:10.956651  592646 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 13:38:10.965449  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:38:10.999475  592646 start.go:296] duration metric: took 163.812753ms for postStartSetup
	I1213 13:38:10.999990  592646 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-627277
	I1213 13:38:11.030043  592646 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/config.json ...
	I1213 13:38:11.030289  592646 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:38:11.030325  592646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-627277
	I1213 13:38:11.048127  592646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33381 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/stopped-upgrade-627277/id_rsa Username:docker}
	I1213 13:38:11.140904  592646 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 13:38:11.146113  592646 start.go:128] duration metric: took 9.029929974s to createHost
	I1213 13:38:11.146131  592646 start.go:83] releasing machines lock for "stopped-upgrade-627277", held for 9.030049138s
	I1213 13:38:11.146200  592646 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-627277
	I1213 13:38:11.169300  592646 ssh_runner.go:195] Run: cat /version.json
	I1213 13:38:11.169353  592646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-627277
	I1213 13:38:11.169412  592646 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:38:11.169515  592646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-627277
	I1213 13:38:11.191531  592646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33381 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/stopped-upgrade-627277/id_rsa Username:docker}
	I1213 13:38:11.191873  592646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33381 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/stopped-upgrade-627277/id_rsa Username:docker}
	I1213 13:38:11.282888  592646 ssh_runner.go:195] Run: systemctl --version
	I1213 13:38:11.385899  592646 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:38:11.552365  592646 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 13:38:11.559333  592646 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:38:11.589086  592646 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1213 13:38:11.589163  592646 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:38:11.625106  592646 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
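
[Editor's aside] The find/mv invocation above is how the default bridge CNI configs are taken out of the way: anything matching *bridge* or *podman* in /etc/cni/net.d (here 87-podman-bridge.conflist and 100-crio-bridge.conf) is renamed with a .mk_disabled suffix so kindnet can be used instead. A rough Go equivalent of that rename pass, shown only as an illustration (requires root):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d" // directory from the log above
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			fmt.Println("rename failed:", err)
			return
		}
		fmt.Println("disabled", src)
	}
}
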
	I1213 13:38:11.625124  592646 start.go:495] detecting cgroup driver to use...
	I1213 13:38:11.625159  592646 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 13:38:11.625211  592646 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:38:11.642234  592646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:38:11.655864  592646 docker.go:217] disabling cri-docker service (if available) ...
	I1213 13:38:11.655945  592646 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:38:11.674167  592646 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:38:11.694365  592646 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:38:11.787494  592646 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:38:11.881328  592646 docker.go:233] disabling docker service ...
	I1213 13:38:11.881377  592646 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:38:11.899757  592646 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:38:11.912516  592646 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:38:12.075668  592646 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:38:12.306212  592646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:38:12.323703  592646 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:38:12.341408  592646 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1213 13:38:12.341476  592646 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:12.357829  592646 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 13:38:12.357903  592646 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:12.369634  592646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:12.382167  592646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:12.395387  592646 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:38:12.407028  592646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:12.419575  592646 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:12.437282  592646 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:12.449552  592646 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:38:12.459306  592646 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:38:12.468360  592646 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:38:12.553600  592646 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 13:38:12.681007  592646 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:38:12.681171  592646 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:38:12.685685  592646 start.go:563] Will wait 60s for crictl version
	I1213 13:38:12.685740  592646 ssh_runner.go:195] Run: which crictl
	I1213 13:38:12.689825  592646 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 13:38:12.738702  592646 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1213 13:38:12.738800  592646 ssh_runner.go:195] Run: crio --version
	I1213 13:38:12.776313  592646 ssh_runner.go:195] Run: crio --version
	I1213 13:38:12.829611  592646 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.24.6 ...
	I1213 13:38:12.433366  593655 cli_runner.go:164] Run: docker network inspect pause-484783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:38:12.452312  593655 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1213 13:38:12.456935  593655 kubeadm.go:884] updating cluster {Name:pause-484783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-484783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regis
try-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:38:12.457208  593655 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:38:12.457276  593655 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:38:12.497444  593655 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:38:12.497478  593655 crio.go:433] Images already preloaded, skipping extraction
	I1213 13:38:12.497545  593655 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:38:12.535170  593655 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:38:12.535196  593655 cache_images.go:86] Images are preloaded, skipping loading
	I1213 13:38:12.535204  593655 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1213 13:38:12.535312  593655 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-484783 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-484783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 13:38:12.535383  593655 ssh_runner.go:195] Run: crio config
	I1213 13:38:12.590097  593655 cni.go:84] Creating CNI manager for ""
	I1213 13:38:12.590118  593655 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:38:12.590136  593655 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 13:38:12.590161  593655 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-484783 NodeName:pause-484783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:38:12.590310  593655 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-484783"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 13:38:12.590386  593655 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 13:38:12.600067  593655 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:38:12.600144  593655 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:38:12.608954  593655 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1213 13:38:12.629091  593655 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 13:38:12.643583  593655 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
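
[Editor's aside] The kubeadm.yaml written above (2211 bytes, copied to /var/tmp/minikube/kubeadm.yaml.new) is the multi-document YAML dumped a few lines earlier: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by "---". As a hedged sketch of walking those documents, using the gopkg.in/yaml.v3 package (an assumption for illustration, not what minikube itself uses here):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		// Decode only the identifying header of each document.
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Println("decode error:", err)
			return
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}
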
	I1213 13:38:12.658522  593655 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1213 13:38:12.663654  593655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:38:12.789009  593655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:38:12.807145  593655 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783 for IP: 192.168.103.2
	I1213 13:38:12.807166  593655 certs.go:195] generating shared ca certs ...
	I1213 13:38:12.807183  593655 certs.go:227] acquiring lock for ca certs: {Name:mkb6963f3134ffd486c672ddb3a967e56122d5d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:38:12.807370  593655 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key
	I1213 13:38:12.807407  593655 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key
	I1213 13:38:12.807417  593655 certs.go:257] generating profile certs ...
	I1213 13:38:12.807675  593655 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/client.key
	I1213 13:38:12.807747  593655 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/apiserver.key.8b604a96
	I1213 13:38:12.807815  593655 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/proxy-client.key
	I1213 13:38:12.808022  593655 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem (1338 bytes)
	W1213 13:38:12.808061  593655 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130_empty.pem, impossibly tiny 0 bytes
	I1213 13:38:12.808075  593655 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:38:12.808098  593655 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:38:12.808126  593655 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:38:12.808145  593655 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem (1679 bytes)
	I1213 13:38:12.808189  593655 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:38:12.808972  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:38:12.832132  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 13:38:12.852770  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:38:12.872982  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 13:38:12.892808  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 13:38:12.914520  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 13:38:12.934505  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:38:12.953400  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 13:38:12.972581  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:38:12.993724  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem --> /usr/share/ca-certificates/394130.pem (1338 bytes)
	I1213 13:38:13.012062  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /usr/share/ca-certificates/3941302.pem (1708 bytes)
	I1213 13:38:13.032418  593655 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:38:13.047037  593655 ssh_runner.go:195] Run: openssl version
	I1213 13:38:13.053917  593655 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/394130.pem
	I1213 13:38:13.063099  593655 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/394130.pem /etc/ssl/certs/394130.pem
	I1213 13:38:13.071453  593655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/394130.pem
	I1213 13:38:13.076510  593655 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/394130.pem
	I1213 13:38:13.076584  593655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/394130.pem
	I1213 13:38:13.126141  593655 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 13:38:13.134735  593655 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3941302.pem
	I1213 13:38:13.142828  593655 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3941302.pem /etc/ssl/certs/3941302.pem
	I1213 13:38:13.151894  593655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3941302.pem
	I1213 13:38:13.156075  593655 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/3941302.pem
	I1213 13:38:13.156137  593655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3941302.pem
	I1213 13:38:13.202944  593655 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 13:38:13.211144  593655 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:38:13.219943  593655 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:38:13.230304  593655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:38:13.234753  593655 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:38:13.234843  593655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:38:13.283842  593655 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
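	The three test/ln/x509-hash/test-L sequences above implement OpenSSL's c_rehash-style CA lookup: each PEM placed under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject-name hash, which is where the otherwise cryptic names 51391683.0, 3ec20f2e.0 and b5213941.0 come from. A minimal sketch of one round, assuming the same minikubeCA.pem path used in this run:
	
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	  test -L "/etc/ssl/certs/${hash}.0" && echo "CA registered as ${hash}.0"
	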
	I1213 13:38:13.293053  593655 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:38:13.297757  593655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 13:38:13.333861  593655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 13:38:13.371271  593655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 13:38:13.418018  593655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 13:38:13.453976  593655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 13:38:13.491408  593655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
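	Each of the openssl ... -checkend 86400 probes above asks whether the named certificate expires within the next 86400 seconds (24 hours): openssl exits 0 if the cert stays valid past that window and 1 otherwise, which is presumably what the restart path keys its keep-or-regenerate decision on. A hedged one-liner showing the same probe against one of the certs checked here:
	
	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "still valid for at least 24h" \
	    || echo "expires within 24h"
	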
	I1213 13:38:13.527945  593655 kubeadm.go:401] StartCluster: {Name:pause-484783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-484783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry
-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:38:13.528063  593655 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:38:13.528117  593655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:38:13.560008  593655 cri.go:89] found id: "b7a079362a05a33a6c08a026193aa38711129117e8344e3afb4881a090d24a14"
	I1213 13:38:13.560032  593655 cri.go:89] found id: "b12706571c86e31586b14584ea7da146350691dcdca5e35260ce4e8e2451dd17"
	I1213 13:38:13.560038  593655 cri.go:89] found id: "a86b7f66e2ebc03c4274e16df61bd2f021f1bd1120855d661801ec9c61029a88"
	I1213 13:38:13.560043  593655 cri.go:89] found id: "ac41f2f2f9b1bf7cd0fe9a4142241d9e5845d510dd7698a4d9ef37991b4c7c01"
	I1213 13:38:13.560047  593655 cri.go:89] found id: "9c7bf796178c3a16afac713c7182399638dd0c8cf1ff2a54bcf6a4c0c606997e"
	I1213 13:38:13.560052  593655 cri.go:89] found id: "d3b97f25c25e6b8f8c97f9a4d2b4d8d07f26642a7dceb69f6e3a270f5f27f195"
	I1213 13:38:13.560057  593655 cri.go:89] found id: "15061cf1e02860e17c437138c82b5df9e17b52159da0cdab64b358cbd74510ac"
	I1213 13:38:13.560061  593655 cri.go:89] found id: "199ba1838f628b89e26d7b2703ef401cffa89869e6755f9fc80b4d636b3fdc88"
	I1213 13:38:13.560068  593655 cri.go:89] found id: ""
	I1213 13:38:13.560123  593655 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 13:38:13.573838  593655 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:38:13Z" level=error msg="open /run/runc: no such file or directory"
	I1213 13:38:13.573910  593655 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:38:13.583183  593655 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 13:38:13.583203  593655 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 13:38:13.583255  593655 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 13:38:13.592821  593655 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 13:38:13.593571  593655 kubeconfig.go:125] found "pause-484783" server: "https://192.168.103.2:8443"
	I1213 13:38:13.594531  593655 kapi.go:59] client config for pause-484783: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/client.key", CAFile:"/home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]
string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 13:38:13.595100  593655 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 13:38:13.595121  593655 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 13:38:13.595126  593655 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 13:38:13.595131  593655 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 13:38:13.595135  593655 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 13:38:13.595546  593655 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 13:38:13.603864  593655 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1213 13:38:13.603903  593655 kubeadm.go:602] duration metric: took 20.692602ms to restartPrimaryControlPlane
	I1213 13:38:13.603915  593655 kubeadm.go:403] duration metric: took 75.983303ms to StartCluster
	I1213 13:38:13.603935  593655 settings.go:142] acquiring lock: {Name:mkb44193ba58b09d8615650747eaad19c43e1a80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:38:13.604013  593655 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:38:13.605106  593655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/kubeconfig: {Name:mke96882ff9199e558f67b9408c8f04265bde7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:38:13.605332  593655 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:38:13.605399  593655 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 13:38:13.605643  593655 config.go:182] Loaded profile config "pause-484783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:38:13.615945  593655 out.go:179] * Verifying Kubernetes components...
	I1213 13:38:13.615951  593655 out.go:179] * Enabled addons: 
	I1213 13:38:13.617296  593655 addons.go:530] duration metric: took 11.90665ms for enable addons: enabled=[]
	I1213 13:38:13.617340  593655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:38:13.747520  593655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:38:13.761638  593655 node_ready.go:35] waiting up to 6m0s for node "pause-484783" to be "Ready" ...
	I1213 13:38:13.771765  593655 node_ready.go:49] node "pause-484783" is "Ready"
	I1213 13:38:13.771827  593655 node_ready.go:38] duration metric: took 10.136223ms for node "pause-484783" to be "Ready" ...
	I1213 13:38:13.771843  593655 api_server.go:52] waiting for apiserver process to appear ...
	I1213 13:38:13.771899  593655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:38:13.783651  593655 api_server.go:72] duration metric: took 178.284197ms to wait for apiserver process to appear ...
	I1213 13:38:13.783678  593655 api_server.go:88] waiting for apiserver healthz status ...
	I1213 13:38:13.783706  593655 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1213 13:38:13.788696  593655 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1213 13:38:13.789700  593655 api_server.go:141] control plane version: v1.34.2
	I1213 13:38:13.789730  593655 api_server.go:131] duration metric: took 6.044088ms to wait for apiserver health ...
	I1213 13:38:13.789741  593655 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 13:38:13.793630  593655 system_pods.go:59] 8 kube-system pods found
	I1213 13:38:13.793655  593655 system_pods.go:61] "coredns-66bc5c9577-5fv2k" [1e6a39a0-3609-42c3-8532-b6f8ceffda42] Running
	I1213 13:38:13.793660  593655 system_pods.go:61] "coredns-66bc5c9577-t7b79" [019ff04f-8424-4ee8-954f-eb1c487771ce] Running
	I1213 13:38:13.793664  593655 system_pods.go:61] "etcd-pause-484783" [5b4441d6-1c6a-4fed-89f8-7ca5b757902e] Running
	I1213 13:38:13.793667  593655 system_pods.go:61] "kindnet-lr5xb" [4d5ad2a1-01cd-4f82-b31e-f3b1719fdc20] Running
	I1213 13:38:13.793671  593655 system_pods.go:61] "kube-apiserver-pause-484783" [f06441a2-a912-4ea8-9ff6-840d8d955998] Running
	I1213 13:38:13.793675  593655 system_pods.go:61] "kube-controller-manager-pause-484783" [5c7ba362-ed69-4ec3-8e13-815fc922a278] Running
	I1213 13:38:13.793679  593655 system_pods.go:61] "kube-proxy-kn5hh" [9fd4100d-49a1-4083-b901-4f4583d22ac0] Running
	I1213 13:38:13.793682  593655 system_pods.go:61] "kube-scheduler-pause-484783" [a91afa15-1576-461c-8806-66dd0f5b9209] Running
	I1213 13:38:13.793687  593655 system_pods.go:74] duration metric: took 3.939525ms to wait for pod list to return data ...
	I1213 13:38:13.793693  593655 default_sa.go:34] waiting for default service account to be created ...
	I1213 13:38:13.795586  593655 default_sa.go:45] found service account: "default"
	I1213 13:38:13.795619  593655 default_sa.go:55] duration metric: took 1.903411ms for default service account to be created ...
	I1213 13:38:13.795629  593655 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 13:38:13.798236  593655 system_pods.go:86] 8 kube-system pods found
	I1213 13:38:13.798264  593655 system_pods.go:89] "coredns-66bc5c9577-5fv2k" [1e6a39a0-3609-42c3-8532-b6f8ceffda42] Running
	I1213 13:38:13.798273  593655 system_pods.go:89] "coredns-66bc5c9577-t7b79" [019ff04f-8424-4ee8-954f-eb1c487771ce] Running
	I1213 13:38:13.798279  593655 system_pods.go:89] "etcd-pause-484783" [5b4441d6-1c6a-4fed-89f8-7ca5b757902e] Running
	I1213 13:38:13.798284  593655 system_pods.go:89] "kindnet-lr5xb" [4d5ad2a1-01cd-4f82-b31e-f3b1719fdc20] Running
	I1213 13:38:13.798293  593655 system_pods.go:89] "kube-apiserver-pause-484783" [f06441a2-a912-4ea8-9ff6-840d8d955998] Running
	I1213 13:38:13.798299  593655 system_pods.go:89] "kube-controller-manager-pause-484783" [5c7ba362-ed69-4ec3-8e13-815fc922a278] Running
	I1213 13:38:13.798308  593655 system_pods.go:89] "kube-proxy-kn5hh" [9fd4100d-49a1-4083-b901-4f4583d22ac0] Running
	I1213 13:38:13.798313  593655 system_pods.go:89] "kube-scheduler-pause-484783" [a91afa15-1576-461c-8806-66dd0f5b9209] Running
	I1213 13:38:13.798321  593655 system_pods.go:126] duration metric: took 2.681313ms to wait for k8s-apps to be running ...
	I1213 13:38:13.798333  593655 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 13:38:13.798386  593655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:38:13.811491  593655 system_svc.go:56] duration metric: took 13.14952ms WaitForService to wait for kubelet
	I1213 13:38:13.811518  593655 kubeadm.go:587] duration metric: took 206.157232ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:38:13.811540  593655 node_conditions.go:102] verifying NodePressure condition ...
	I1213 13:38:13.814176  593655 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 13:38:13.814216  593655 node_conditions.go:123] node cpu capacity is 8
	I1213 13:38:13.814234  593655 node_conditions.go:105] duration metric: took 2.687845ms to run NodePressure ...
	I1213 13:38:13.814250  593655 start.go:242] waiting for startup goroutines ...
	I1213 13:38:13.814261  593655 start.go:247] waiting for cluster config update ...
	I1213 13:38:13.814272  593655 start.go:256] writing updated cluster config ...
	I1213 13:38:13.814634  593655 ssh_runner.go:195] Run: rm -f paused
	I1213 13:38:13.818426  593655 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:38:13.819005  593655 kapi.go:59] client config for pause-484783: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/client.key", CAFile:"/home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]
string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 13:38:13.821755  593655 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5fv2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:13.826281  593655 pod_ready.go:94] pod "coredns-66bc5c9577-5fv2k" is "Ready"
	I1213 13:38:13.826308  593655 pod_ready.go:86] duration metric: took 4.507585ms for pod "coredns-66bc5c9577-5fv2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:13.826319  593655 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t7b79" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:13.830514  593655 pod_ready.go:94] pod "coredns-66bc5c9577-t7b79" is "Ready"
	I1213 13:38:13.830537  593655 pod_ready.go:86] duration metric: took 4.210977ms for pod "coredns-66bc5c9577-t7b79" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:13.832409  593655 pod_ready.go:83] waiting for pod "etcd-pause-484783" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:13.836262  593655 pod_ready.go:94] pod "etcd-pause-484783" is "Ready"
	I1213 13:38:13.836283  593655 pod_ready.go:86] duration metric: took 3.856519ms for pod "etcd-pause-484783" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:13.838012  593655 pod_ready.go:83] waiting for pod "kube-apiserver-pause-484783" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:12.831415  592646 cli_runner.go:164] Run: docker network inspect stopped-upgrade-627277 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:38:12.852704  592646 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 13:38:12.857031  592646 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:38:12.870140  592646 kubeadm.go:883] updating cluster {Name:stopped-upgrade-627277 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-627277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:38:12.870286  592646 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1213 13:38:12.870355  592646 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:38:12.957362  592646 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:38:12.957382  592646 crio.go:433] Images already preloaded, skipping extraction
	I1213 13:38:12.957440  592646 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:38:12.996286  592646 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:38:12.996299  592646 cache_images.go:84] Images are preloaded, skipping loading
	I1213 13:38:12.996307  592646 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.0 crio true true} ...
	I1213 13:38:12.996396  592646 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=stopped-upgrade-627277 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-627277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 13:38:12.996460  592646 ssh_runner.go:195] Run: crio config
	I1213 13:38:13.044192  592646 cni.go:84] Creating CNI manager for ""
	I1213 13:38:13.044209  592646 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:38:13.044222  592646 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1213 13:38:13.044253  592646 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-627277 NodeName:stopped-upgrade-627277 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:38:13.044449  592646 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-627277"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 13:38:13.044531  592646 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1213 13:38:13.055329  592646 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 13:38:13.055393  592646 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:38:13.065282  592646 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1213 13:38:13.084364  592646 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 13:38:13.105965  592646 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1213 13:38:13.126860  592646 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 13:38:13.131572  592646 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:38:13.144127  592646 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:38:13.223929  592646 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:38:13.247791  592646 certs.go:68] Setting up /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277 for IP: 192.168.85.2
	I1213 13:38:13.247808  592646 certs.go:194] generating shared ca certs ...
	I1213 13:38:13.247830  592646 certs.go:226] acquiring lock for ca certs: {Name:mkb6963f3134ffd486c672ddb3a967e56122d5d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:38:13.247993  592646 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key
	I1213 13:38:13.248040  592646 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key
	I1213 13:38:13.248046  592646 certs.go:256] generating profile certs ...
	I1213 13:38:13.248138  592646 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/client.key
	I1213 13:38:13.248152  592646 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/client.crt with IP's: []
	I1213 13:38:13.537510  592646 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/client.crt ...
	I1213 13:38:13.537529  592646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/client.crt: {Name:mk869752f878e37fe04e50ccc342273c02967a43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:38:13.537706  592646 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/client.key ...
	I1213 13:38:13.537717  592646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/client.key: {Name:mk87c0f9f176710f4412fd732d5cc61d024a4440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:38:13.537862  592646 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.key.b4da4589
	I1213 13:38:13.537877  592646 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.crt.b4da4589 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 13:38:13.640950  592646 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.crt.b4da4589 ...
	I1213 13:38:13.640968  592646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.crt.b4da4589: {Name:mkd1d7a248ab81559206b0df2067477218dcb7be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:38:13.641113  592646 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.key.b4da4589 ...
	I1213 13:38:13.641121  592646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.key.b4da4589: {Name:mkaeb313542685a81d95cde202a259e441c6c1f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:38:13.641196  592646 certs.go:381] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.crt.b4da4589 -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.crt
	I1213 13:38:13.641262  592646 certs.go:385] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.key.b4da4589 -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.key
	I1213 13:38:13.641308  592646 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/proxy-client.key
	I1213 13:38:13.641319  592646 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/proxy-client.crt with IP's: []
	I1213 13:38:13.895634  592646 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/proxy-client.crt ...
	I1213 13:38:13.895651  592646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/proxy-client.crt: {Name:mk607aedb5ba13e5ac459a51d2799265d0c01fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:38:13.895851  592646 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/proxy-client.key ...
	I1213 13:38:13.895861  592646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/proxy-client.key: {Name:mk1dfdf1437138721eeca3ac125ce29414725d6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:38:13.896107  592646 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem (1338 bytes)
	W1213 13:38:13.896143  592646 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130_empty.pem, impossibly tiny 0 bytes
	I1213 13:38:13.896156  592646 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:38:13.896178  592646 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:38:13.896196  592646 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:38:13.896219  592646 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem (1679 bytes)
	I1213 13:38:13.896255  592646 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:38:13.896995  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:38:13.923164  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 13:38:13.947713  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:38:13.972159  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 13:38:13.996923  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 13:38:14.020884  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 13:38:14.046975  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:38:14.072090  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 13:38:14.098114  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:38:14.130405  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem --> /usr/share/ca-certificates/394130.pem (1338 bytes)
	I1213 13:38:14.160044  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /usr/share/ca-certificates/3941302.pem (1708 bytes)
	I1213 13:38:14.188863  592646 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:38:14.210034  592646 ssh_runner.go:195] Run: openssl version
	I1213 13:38:14.215663  592646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 13:38:14.226130  592646 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:38:14.229920  592646 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:38:14.229962  592646 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:38:14.236899  592646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 13:38:14.246638  592646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/394130.pem && ln -fs /usr/share/ca-certificates/394130.pem /etc/ssl/certs/394130.pem"
	I1213 13:38:14.256647  592646 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/394130.pem
	I1213 13:38:14.260905  592646 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/394130.pem
	I1213 13:38:14.260957  592646 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/394130.pem
	I1213 13:38:14.267822  592646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/394130.pem /etc/ssl/certs/51391683.0"
	I1213 13:38:14.278071  592646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3941302.pem && ln -fs /usr/share/ca-certificates/3941302.pem /etc/ssl/certs/3941302.pem"
	I1213 13:38:14.290132  592646 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3941302.pem
	I1213 13:38:14.293998  592646 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/3941302.pem
	I1213 13:38:14.294067  592646 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3941302.pem
	I1213 13:38:14.300995  592646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3941302.pem /etc/ssl/certs/3ec20f2e.0"
	I1213 13:38:14.311377  592646 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:38:14.315281  592646 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 13:38:14.315349  592646 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-627277 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-627277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] AP
IServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:38:14.315447  592646 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:38:14.315526  592646 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:38:14.355748  592646 cri.go:89] found id: ""
	I1213 13:38:14.355850  592646 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:38:14.365959  592646 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 13:38:14.376620  592646 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1213 13:38:14.376674  592646 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 13:38:14.387243  592646 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 13:38:14.387260  592646 kubeadm.go:157] found existing configuration files:
	
	I1213 13:38:14.387311  592646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 13:38:14.397268  592646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 13:38:14.397329  592646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 13:38:14.406254  592646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 13:38:14.415285  592646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 13:38:14.415335  592646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 13:38:14.424347  592646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 13:38:14.433596  592646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 13:38:14.433641  592646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 13:38:14.442606  592646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 13:38:14.451995  592646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 13:38:14.452043  592646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 13:38:14.460700  592646 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 13:38:14.520030  592646 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1213 13:38:14.575326  592646 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 13:38:14.023345  593655 pod_ready.go:94] pod "kube-apiserver-pause-484783" is "Ready"
	I1213 13:38:14.023372  593655 pod_ready.go:86] duration metric: took 185.338795ms for pod "kube-apiserver-pause-484783" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:14.222185  593655 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-484783" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:14.623294  593655 pod_ready.go:94] pod "kube-controller-manager-pause-484783" is "Ready"
	I1213 13:38:14.623327  593655 pod_ready.go:86] duration metric: took 401.115125ms for pod "kube-controller-manager-pause-484783" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:14.822449  593655 pod_ready.go:83] waiting for pod "kube-proxy-kn5hh" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:15.222275  593655 pod_ready.go:94] pod "kube-proxy-kn5hh" is "Ready"
	I1213 13:38:15.222306  593655 pod_ready.go:86] duration metric: took 399.827535ms for pod "kube-proxy-kn5hh" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:15.422486  593655 pod_ready.go:83] waiting for pod "kube-scheduler-pause-484783" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:15.822550  593655 pod_ready.go:94] pod "kube-scheduler-pause-484783" is "Ready"
	I1213 13:38:15.822584  593655 pod_ready.go:86] duration metric: took 400.071657ms for pod "kube-scheduler-pause-484783" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:15.822597  593655 pod_ready.go:40] duration metric: took 2.004130124s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:38:15.871402  593655 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 13:38:16.037568  593655 out.go:179] * Done! kubectl is now configured to use "pause-484783" cluster and "default" namespace by default
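	For reference, the healthz wait logged above (https://192.168.103.2:8443/healthz returned 200: ok) can be reproduced by hand with the same profile credentials; the paths below assume this job's .minikube layout and are illustrative only, not part of the test run:
	
	  curl --cacert /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt \
	       --cert   /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/client.crt \
	       --key    /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/client.key \
	       https://192.168.103.2:8443/healthz
	  # a healthy control plane answers with the literal body: ok
	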
	I1213 13:38:12.079836  595557 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 13:38:12.080133  595557 start.go:159] libmachine.API.Create for "missing-upgrade-533439" (driver="docker")
	I1213 13:38:12.080174  595557 client.go:168] LocalClient.Create starting
	I1213 13:38:12.080290  595557 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem
	I1213 13:38:12.080335  595557 main.go:141] libmachine: Decoding PEM data...
	I1213 13:38:12.080354  595557 main.go:141] libmachine: Parsing certificate...
	I1213 13:38:12.080421  595557 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem
	I1213 13:38:12.080443  595557 main.go:141] libmachine: Decoding PEM data...
	I1213 13:38:12.080455  595557 main.go:141] libmachine: Parsing certificate...
	I1213 13:38:12.080998  595557 cli_runner.go:164] Run: docker network inspect missing-upgrade-533439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 13:38:12.099189  595557 cli_runner.go:211] docker network inspect missing-upgrade-533439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 13:38:12.099259  595557 network_create.go:284] running [docker network inspect missing-upgrade-533439] to gather additional debugging logs...
	I1213 13:38:12.099271  595557 cli_runner.go:164] Run: docker network inspect missing-upgrade-533439
	W1213 13:38:12.114991  595557 cli_runner.go:211] docker network inspect missing-upgrade-533439 returned with exit code 1
	I1213 13:38:12.115014  595557 network_create.go:287] error running [docker network inspect missing-upgrade-533439]: docker network inspect missing-upgrade-533439: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-533439 not found
	I1213 13:38:12.115047  595557 network_create.go:289] output of [docker network inspect missing-upgrade-533439]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-533439 not found
	
	** /stderr **
	I1213 13:38:12.115174  595557 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:38:12.132586  595557 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-90c6185d3a1c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:d7:d8:45:ed:62} reservation:<nil>}
	I1213 13:38:12.134001  595557 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b99c511b2851 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:f5:60:cf:cf:e0} reservation:<nil>}
	I1213 13:38:12.135003  595557 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8173e81c4a82 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:76:c5:9d:b0:f9} reservation:<nil>}
	I1213 13:38:12.136502  595557 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001faa2a0}
	I1213 13:38:12.136537  595557 network_create.go:124] attempt to create docker network missing-upgrade-533439 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 13:38:12.136604  595557 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-533439 missing-upgrade-533439
	I1213 13:38:12.193451  595557 network_create.go:108] docker network missing-upgrade-533439 192.168.76.0/24 created
	I1213 13:38:12.193479  595557 kic.go:121] calculated static IP "192.168.76.2" for the "missing-upgrade-533439" container
	I1213 13:38:12.193543  595557 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 13:38:12.212872  595557 cli_runner.go:164] Run: docker volume create missing-upgrade-533439 --label name.minikube.sigs.k8s.io=missing-upgrade-533439 --label created_by.minikube.sigs.k8s.io=true
	I1213 13:38:12.232265  595557 oci.go:103] Successfully created a docker volume missing-upgrade-533439
	I1213 13:38:12.232333  595557 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-533439-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-533439 --entrypoint /usr/bin/test -v missing-upgrade-533439:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I1213 13:38:12.645625  595557 oci.go:107] Successfully prepared a docker volume missing-upgrade-533439
	I1213 13:38:12.645682  595557 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1213 13:38:12.645706  595557 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 13:38:12.645857  595557 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-533439:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
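	The network.go lines above show the subnet selection minikube performs before creating the missing-upgrade-533439 network: each 192.168.x.0/24 block already backing a docker bridge is skipped, and the first free block (here 192.168.76.0/24) is used. Below is a minimal, self-contained sketch of that scan; it is an illustration only, not minikube's implementation, and the candidate list simply mirrors the 49/58/67/76 progression seen in the log.

	// Sketch of the subnet-scan behaviour visible in the network.go lines above:
	// walk candidate 192.168.x.0/24 blocks and return the first one not already
	// claimed by an existing bridge. Illustration only, not minikube's code.
	package main

	import (
		"fmt"
		"net"
	)

	// firstFreeSubnet returns the first candidate CIDR that does not overlap any
	// taken CIDR, or an empty string if every candidate is taken.
	func firstFreeSubnet(candidates, taken []string) string {
		for _, c := range candidates {
			_, cNet, err := net.ParseCIDR(c)
			if err != nil {
				continue
			}
			free := true
			for _, t := range taken {
				_, tNet, err := net.ParseCIDR(t)
				if err != nil {
					continue
				}
				if cNet.Contains(tNet.IP) || tNet.Contains(cNet.IP) {
					free = false
					break
				}
			}
			if free {
				return c
			}
		}
		return ""
	}

	func main() {
		// Subnets reported as taken in the log (existing docker bridges).
		taken := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"}
		// Candidate private /24 blocks, stepping by 9 as in the log output.
		var candidates []string
		for third := 49; third <= 255; third += 9 {
			candidates = append(candidates, fmt.Sprintf("192.168.%d.0/24", third))
		}
		fmt.Println(firstFreeSubnet(candidates, taken)) // expected: 192.168.76.0/24
	}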
	
	
	==> CRI-O <==
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.231764963Z" level=info msg="Conmon does support the --sync option"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.231808825Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.231823535Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.232576661Z" level=info msg="Conmon does support the --sync option"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.232600859Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.23675402Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.236783453Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.237273036Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.237669324Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.237714669Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.314718234Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-t7b79 Namespace:kube-system ID:8a5f1aacc4bcc505676ea3a5a8af63bfe2c975687cbed8be179eda960b966327 UID:019ff04f-8424-4ee8-954f-eb1c487771ce NetNS:/var/run/netns/30fe85d5-55f9-4ca1-9c28-231814ce4081 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005983f0}] Aliases:map[]}"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.314954643Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-t7b79 for CNI network kindnet (type=ptp)"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.315360905Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-5fv2k Namespace:kube-system ID:d5cc912d29fe6e432b9cc575cd37322bf0c4481831e94c7c60e0ba392501fff1 UID:1e6a39a0-3609-42c3-8532-b6f8ceffda42 NetNS:/var/run/netns/63647a29-d47b-454d-ba10-2c0f00ba22a3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000598550}] Aliases:map[]}"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.315525832Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-5fv2k for CNI network kindnet (type=ptp)"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.316408194Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.316436775Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.316512534Z" level=info msg="Create NRI interface"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.316722259Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.316745925Z" level=info msg="runtime interface created"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.316765332Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.316797993Z" level=info msg="runtime interface starting up..."
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.316807275Z" level=info msg="starting plugins..."
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.316833057Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.317414439Z" level=info msg="No systemd watchdog enabled"
	Dec 13 13:38:12 pause-484783 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	b7a079362a05a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago       Running             coredns                   0                   8a5f1aacc4bcc       coredns-66bc5c9577-t7b79               kube-system
	b12706571c86e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago       Running             coredns                   0                   d5cc912d29fe6       coredns-66bc5c9577-5fv2k               kube-system
	a86b7f66e2ebc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   55 seconds ago       Running             kindnet-cni               0                   ac18ce2a45e1f       kindnet-lr5xb                          kube-system
	ac41f2f2f9b1b       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   55 seconds ago       Running             kube-proxy                0                   44da173cdf9bf       kube-proxy-kn5hh                       kube-system
	9c7bf796178c3       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   About a minute ago   Running             etcd                      0                   edf0101a9d385       etcd-pause-484783                      kube-system
	d3b97f25c25e6       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   About a minute ago   Running             kube-scheduler            0                   8e4a39d30480e       kube-scheduler-pause-484783            kube-system
	15061cf1e0286       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   About a minute ago   Running             kube-controller-manager   0                   b5d6ee19724e7       kube-controller-manager-pause-484783   kube-system
	199ba1838f628       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   About a minute ago   Running             kube-apiserver            0                   b0bda7c8295df       kube-apiserver-pause-484783            kube-system
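	The table above is the CRI view of the pause-484783 containers. For reference, the same listing can be pulled programmatically by shelling out to crictl and decoding its JSON output; the sketch below assumes the field names of the CRI ListContainers response (id, state, metadata.name, labels), so verify them against the crictl version on the node before relying on them.

	// Rough sketch of reading the container table above programmatically via
	// crictl. JSON field names are assumptions based on the CRI ListContainers
	// response, not taken from this report.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type criContainer struct {
		ID       string `json:"id"`
		State    string `json:"state"` // e.g. CONTAINER_RUNNING
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Labels map[string]string `json:"labels"`
	}

	type criPsOutput struct {
		Containers []criContainer `json:"containers"`
	}

	func main() {
		// crictl ps without -a lists running containers, like the section above.
		out, err := exec.Command("crictl", "ps", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var ps criPsOutput
		if err := json.Unmarshal(out, &ps); err != nil {
			panic(err)
		}
		for _, c := range ps.Containers {
			fmt.Printf("%-13.13s %-20s %-25s %s\n",
				c.ID, c.State, c.Metadata.Name,
				c.Labels["io.kubernetes.pod.name"])
		}
	}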
	
	
	==> coredns [b12706571c86e31586b14584ea7da146350691dcdca5e35260ce4e8e2451dd17] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35787 - 5482 "HINFO IN 8757055889704573740.2148341591570753963. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.097271417s
	
	
	==> coredns [b7a079362a05a33a6c08a026193aa38711129117e8344e3afb4881a090d24a14] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33401 - 43482 "HINFO IN 1170256902137577025.8256869183820981561. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.079908291s
	
	
	==> describe nodes <==
	Name:               pause-484783
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-484783
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=pause-484783
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_37_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:37:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-484783
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:38:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:38:09 +0000   Sat, 13 Dec 2025 13:37:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:38:09 +0000   Sat, 13 Dec 2025 13:37:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:38:09 +0000   Sat, 13 Dec 2025 13:37:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 13:38:09 +0000   Sat, 13 Dec 2025 13:38:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-484783
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                155ba732-0e2c-4b6e-8ad1-6aafd2a9edb7
	  Boot ID:                    3a031c38-2de5-4abf-9191-ca3cf8c37af1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-5fv2k                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     56s
	  kube-system                 coredns-66bc5c9577-t7b79                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     56s
	  kube-system                 etcd-pause-484783                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         63s
	  kube-system                 kindnet-lr5xb                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-pause-484783             250m (3%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-pause-484783    200m (2%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-kn5hh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-pause-484783             100m (1%)     0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  68s (x8 over 68s)  kubelet          Node pause-484783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s (x8 over 68s)  kubelet          Node pause-484783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s (x8 over 68s)  kubelet          Node pause-484783 status is now: NodeHasSufficientPID
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s                kubelet          Node pause-484783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s                kubelet          Node pause-484783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s                kubelet          Node pause-484783 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           57s                node-controller  Node pause-484783 event: Registered Node pause-484783 in Controller
	  Normal  NodeReady                15s                kubelet          Node pause-484783 status is now: NodeReady
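	The Conditions block above records the node flipping to Ready at 13:38:05, which the NodeReady event repeats. A minimal client-go sketch of checking that same condition from code follows; it is illustrative only (the node name is taken from this run, the kubeconfig path and the rest are assumed boilerplate) and is not part of the test harness.

	// Minimal client-go sketch (assumed, not part of the harness) that checks the
	// Ready condition shown in the node description above.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the default kubeconfig (path assumed).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-484783", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				fmt.Printf("Ready=%s reason=%s since=%s\n",
					cond.Status, cond.Reason, cond.LastTransitionTime)
			}
		}
	}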
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea b7 dd 32 fb 08 08 06
	[  +0.000396] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be c4 f7 a4 8d 16 08 06
	[Dec13 13:07] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +1.009708] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +1.024845] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +1.022879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +1.023888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +1.024907] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +2.047757] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +4.030610] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +8.255132] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[ +16.382284] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[Dec13 13:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	
	
	==> etcd [9c7bf796178c3a16afac713c7182399638dd0c8cf1ff2a54bcf6a4c0c606997e] <==
	{"level":"info","ts":"2025-12-13T13:37:24.351403Z","caller":"traceutil/trace.go:172","msg":"trace[1328650903] transaction","detail":"{read_only:false; response_revision:349; number_of_response:1; }","duration":"165.767063ms","start":"2025-12-13T13:37:24.185627Z","end":"2025-12-13T13:37:24.351394Z","steps":["trace[1328650903] 'process raft request'  (duration: 165.713566ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:37:24.351435Z","caller":"traceutil/trace.go:172","msg":"trace[114704394] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"322.744746ms","start":"2025-12-13T13:37:24.028682Z","end":"2025-12-13T13:37:24.351426Z","steps":["trace[114704394] 'process raft request'  (duration: 322.44624ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T13:37:24.351484Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"188.765333ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" limit:1 ","response":"range_response_count:1 size:209"}
	{"level":"warn","ts":"2025-12-13T13:37:24.351468Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T13:37:24.027163Z","time spent":"324.011467ms","remote":"127.0.0.1:45712","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":697,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:0 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:640 >> failure:<>"}
	{"level":"info","ts":"2025-12-13T13:37:24.351515Z","caller":"traceutil/trace.go:172","msg":"trace[1356278954] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:1; response_revision:349; }","duration":"188.804034ms","start":"2025-12-13T13:37:24.162703Z","end":"2025-12-13T13:37:24.351507Z","steps":["trace[1356278954] 'agreement among raft nodes before linearized reading'  (duration: 188.703078ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:37:24.351560Z","caller":"traceutil/trace.go:172","msg":"trace[2001224177] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"223.572701ms","start":"2025-12-13T13:37:24.127977Z","end":"2025-12-13T13:37:24.351550Z","steps":["trace[2001224177] 'process raft request'  (duration: 223.23447ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T13:37:24.351650Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T13:37:24.028663Z","time spent":"322.787414ms","remote":"127.0.0.1:46776","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":899,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/clusterroles/admin\" mod_revision:111 > success:<request_put:<key:\"/registry/clusterroles/admin\" value_size:863 >> failure:<request_range:<key:\"/registry/clusterroles/admin\" > >"}
	{"level":"warn","ts":"2025-12-13T13:37:24.351657Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T13:37:24.029128Z","time spent":"322.259876ms","remote":"127.0.0.1:46776","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2194,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/clusterroles/view\" mod_revision:113 > success:<request_put:<key:\"/registry/clusterroles/view\" value_size:2159 >> failure:<request_range:<key:\"/registry/clusterroles/view\" > >"}
	{"level":"warn","ts":"2025-12-13T13:37:24.351715Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"205.482531ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-484783\" limit:1 ","response":"range_response_count:1 size:7498"}
	{"level":"warn","ts":"2025-12-13T13:37:24.351729Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T13:37:24.028609Z","time spent":"322.798364ms","remote":"127.0.0.1:46776","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2299,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/clusterroles/edit\" mod_revision:112 > success:<request_put:<key:\"/registry/clusterroles/edit\" value_size:2264 >> failure:<request_range:<key:\"/registry/clusterroles/edit\" > >"}
	{"level":"info","ts":"2025-12-13T13:37:24.351756Z","caller":"traceutil/trace.go:172","msg":"trace[379070677] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-484783; range_end:; response_count:1; response_revision:349; }","duration":"205.525624ms","start":"2025-12-13T13:37:24.146221Z","end":"2025-12-13T13:37:24.351746Z","steps":["trace[379070677] 'agreement among raft nodes before linearized reading'  (duration: 205.393425ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T13:37:24.351841Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T13:37:24.027232Z","time spent":"323.984412ms","remote":"127.0.0.1:46514","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":805,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/servicecidrs/kubernetes\" mod_revision:16 > success:<request_put:<key:\"/registry/servicecidrs/kubernetes\" value_size:764 >> failure:<request_range:<key:\"/registry/servicecidrs/kubernetes\" > >"}
	{"level":"warn","ts":"2025-12-13T13:37:24.351922Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T13:37:24.027282Z","time spent":"323.928561ms","remote":"127.0.0.1:47386","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3726,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/controllerrevisions/kube-system/kindnet-78f866cbfd\" mod_revision:0 > success:<request_put:<key:\"/registry/controllerrevisions/kube-system/kindnet-78f866cbfd\" value_size:3658 >> failure:<>"}
	{"level":"info","ts":"2025-12-13T13:37:24.351120Z","caller":"traceutil/trace.go:172","msg":"trace[1291544517] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:1; response_revision:340; }","duration":"238.984308ms","start":"2025-12-13T13:37:24.112119Z","end":"2025-12-13T13:37:24.351104Z","steps":["trace[1291544517] 'agreement among raft nodes before linearized reading'  (duration: 153.536622ms)","trace[1291544517] 'range keys from in-memory index tree'  (duration: 85.19425ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T13:37:24.351964Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T13:37:24.028969Z","time spent":"322.36154ms","remote":"127.0.0.1:47386","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/controllerrevisions/kube-system/kube-proxy-66d5f8d6f6\" mod_revision:0 > success:<request_put:<key:\"/registry/controllerrevisions/kube-system/kube-proxy-66d5f8d6f6\" value_size:2031 >> failure:<>"}
	{"level":"warn","ts":"2025-12-13T13:37:24.717553Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"203.331015ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4299"}
	{"level":"info","ts":"2025-12-13T13:37:24.717618Z","caller":"traceutil/trace.go:172","msg":"trace[387483988] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:374; }","duration":"203.399223ms","start":"2025-12-13T13:37:24.514203Z","end":"2025-12-13T13:37:24.717602Z","steps":["trace[387483988] 'agreement among raft nodes before linearized reading'  (duration: 143.134741ms)","trace[387483988] 'range keys from in-memory index tree'  (duration: 60.080872ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T13:37:24.717550Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"203.224309ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-kn5hh\" limit:1 ","response":"range_response_count:1 size:3429"}
	{"level":"info","ts":"2025-12-13T13:37:24.717703Z","caller":"traceutil/trace.go:172","msg":"trace[1441952844] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-kn5hh; range_end:; response_count:1; response_revision:374; }","duration":"203.38692ms","start":"2025-12-13T13:37:24.514302Z","end":"2025-12-13T13:37:24.717689Z","steps":["trace[1441952844] 'agreement among raft nodes before linearized reading'  (duration: 143.059443ms)","trace[1441952844] 'range keys from in-memory index tree'  (duration: 60.08121ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T13:37:24.717714Z","caller":"traceutil/trace.go:172","msg":"trace[11575817] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"194.659378ms","start":"2025-12-13T13:37:24.523044Z","end":"2025-12-13T13:37:24.717703Z","steps":["trace[11575817] 'process raft request'  (duration: 194.616263ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:37:24.717840Z","caller":"traceutil/trace.go:172","msg":"trace[2139430813] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"203.484711ms","start":"2025-12-13T13:37:24.514345Z","end":"2025-12-13T13:37:24.717829Z","steps":["trace[2139430813] 'process raft request'  (duration: 143.044553ms)","trace[2139430813] 'compare'  (duration: 59.981007ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T13:37:24.717997Z","caller":"traceutil/trace.go:172","msg":"trace[443948987] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"200.951047ms","start":"2025-12-13T13:37:24.517037Z","end":"2025-12-13T13:37:24.717988Z","steps":["trace[443948987] 'process raft request'  (duration: 200.553967ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:37:24.717854Z","caller":"traceutil/trace.go:172","msg":"trace[932506412] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"200.739931ms","start":"2025-12-13T13:37:24.517103Z","end":"2025-12-13T13:37:24.717843Z","steps":["trace[932506412] 'process raft request'  (duration: 200.529833ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:37:24.724615Z","caller":"traceutil/trace.go:172","msg":"trace[1883250059] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"128.724544ms","start":"2025-12-13T13:37:24.595749Z","end":"2025-12-13T13:37:24.724474Z","steps":["trace[1883250059] 'process raft request'  (duration: 128.536164ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:37:56.893837Z","caller":"traceutil/trace.go:172","msg":"trace[290670254] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"174.713526ms","start":"2025-12-13T13:37:56.719106Z","end":"2025-12-13T13:37:56.893819Z","steps":["trace[290670254] 'process raft request'  (duration: 174.498271ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:38:20 up  2:20,  0 user,  load average: 3.37, 1.99, 1.53
	Linux pause-484783 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a86b7f66e2ebc03c4274e16df61bd2f021f1bd1120855d661801ec9c61029a88] <==
	I1213 13:37:25.080954       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 13:37:25.081240       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1213 13:37:25.081400       1 main.go:148] setting mtu 1500 for CNI 
	I1213 13:37:25.081415       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 13:37:25.081424       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T13:37:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 13:37:25.381240       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 13:37:25.381284       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 13:37:25.381299       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 13:37:25.381604       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1213 13:37:55.382930       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1213 13:37:55.382934       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1213 13:37:55.382975       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1213 13:37:55.382904       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1213 13:37:56.781700       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 13:37:56.781739       1 metrics.go:72] Registering metrics
	I1213 13:37:56.781860       1 controller.go:711] "Syncing nftables rules"
	I1213 13:38:05.387963       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 13:38:05.388034       1 main.go:301] handling current node
	I1213 13:38:15.382141       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 13:38:15.382221       1 main.go:301] handling current node
	
	
	==> kube-apiserver [199ba1838f628b89e26d7b2703ef401cffa89869e6755f9fc80b4d636b3fdc88] <==
	I1213 13:37:15.805484       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 13:37:15.805491       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 13:37:15.805498       1 cache.go:39] Caches are synced for autoregister controller
	I1213 13:37:15.807248       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1213 13:37:15.809539       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:37:15.823028       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 13:37:15.847788       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:37:15.854304       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 13:37:16.700597       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1213 13:37:16.705414       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1213 13:37:16.705432       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 13:37:17.224488       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 13:37:17.266617       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 13:37:17.409546       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1213 13:37:17.416023       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1213 13:37:17.417458       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 13:37:17.422299       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:37:18.068936       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 13:37:18.367205       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 13:37:18.375135       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1213 13:37:18.383007       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 13:37:24.026519       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1213 13:37:24.125163       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 13:37:24.355014       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:37:24.396820       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [15061cf1e02860e17c437138c82b5df9e17b52159da0cdab64b358cbd74510ac] <==
	I1213 13:37:23.065924       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1213 13:37:23.065848       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 13:37:23.066087       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 13:37:23.066296       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1213 13:37:23.067298       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-484783" podCIDRs=["10.244.0.0/24"]
	I1213 13:37:23.067475       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 13:37:23.074894       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 13:37:23.074933       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 13:37:23.085108       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 13:37:23.114816       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 13:37:23.114820       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 13:37:23.114910       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 13:37:23.114926       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 13:37:23.115010       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 13:37:23.115196       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1213 13:37:23.116051       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 13:37:23.116072       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 13:37:23.116188       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 13:37:23.116593       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 13:37:23.119433       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 13:37:23.121616       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 13:37:23.125903       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 13:37:23.132361       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1213 13:37:23.140646       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 13:38:08.073012       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ac41f2f2f9b1bf7cd0fe9a4142241d9e5845d510dd7698a4d9ef37991b4c7c01] <==
	I1213 13:37:24.941196       1 server_linux.go:53] "Using iptables proxy"
	I1213 13:37:25.009674       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 13:37:25.110258       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 13:37:25.110302       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1213 13:37:25.110411       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:37:25.135337       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:37:25.135384       1 server_linux.go:132] "Using iptables Proxier"
	I1213 13:37:25.140928       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:37:25.141317       1 server.go:527] "Version info" version="v1.34.2"
	I1213 13:37:25.141356       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:37:25.143041       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:37:25.143074       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:37:25.143131       1 config.go:200] "Starting service config controller"
	I1213 13:37:25.143142       1 config.go:309] "Starting node config controller"
	I1213 13:37:25.143153       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:37:25.143155       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:37:25.143161       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:37:25.143145       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:37:25.243279       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:37:25.243341       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 13:37:25.243414       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 13:37:25.244081       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [d3b97f25c25e6b8f8c97f9a4d2b4d8d07f26642a7dceb69f6e3a270f5f27f195] <==
	E1213 13:37:15.770148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 13:37:15.770275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 13:37:15.770435       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 13:37:15.770547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 13:37:15.770705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 13:37:15.770702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 13:37:15.770727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 13:37:15.770786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 13:37:15.770843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 13:37:15.770444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 13:37:15.771174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 13:37:15.771176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 13:37:15.771284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 13:37:15.771430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 13:37:15.771436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 13:37:15.771429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 13:37:16.654823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 13:37:16.729455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 13:37:16.746865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 13:37:16.771276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 13:37:16.826586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 13:37:16.842736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 13:37:16.892419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 13:37:16.950616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1213 13:37:19.767310       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 13:37:24 pause-484783 kubelet[1308]: I1213 13:37:24.571936    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9fd4100d-49a1-4083-b901-4f4583d22ac0-kube-proxy\") pod \"kube-proxy-kn5hh\" (UID: \"9fd4100d-49a1-4083-b901-4f4583d22ac0\") " pod="kube-system/kube-proxy-kn5hh"
	Dec 13 13:37:24 pause-484783 kubelet[1308]: I1213 13:37:24.571971    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fd4100d-49a1-4083-b901-4f4583d22ac0-xtables-lock\") pod \"kube-proxy-kn5hh\" (UID: \"9fd4100d-49a1-4083-b901-4f4583d22ac0\") " pod="kube-system/kube-proxy-kn5hh"
	Dec 13 13:37:24 pause-484783 kubelet[1308]: I1213 13:37:24.571989    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fd4100d-49a1-4083-b901-4f4583d22ac0-lib-modules\") pod \"kube-proxy-kn5hh\" (UID: \"9fd4100d-49a1-4083-b901-4f4583d22ac0\") " pod="kube-system/kube-proxy-kn5hh"
	Dec 13 13:37:24 pause-484783 kubelet[1308]: I1213 13:37:24.572002    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d5ad2a1-01cd-4f82-b31e-f3b1719fdc20-xtables-lock\") pod \"kindnet-lr5xb\" (UID: \"4d5ad2a1-01cd-4f82-b31e-f3b1719fdc20\") " pod="kube-system/kindnet-lr5xb"
	Dec 13 13:37:24 pause-484783 kubelet[1308]: I1213 13:37:24.572031    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shb6r\" (UniqueName: \"kubernetes.io/projected/4d5ad2a1-01cd-4f82-b31e-f3b1719fdc20-kube-api-access-shb6r\") pod \"kindnet-lr5xb\" (UID: \"4d5ad2a1-01cd-4f82-b31e-f3b1719fdc20\") " pod="kube-system/kindnet-lr5xb"
	Dec 13 13:37:24 pause-484783 kubelet[1308]: I1213 13:37:24.572160    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mck98\" (UniqueName: \"kubernetes.io/projected/9fd4100d-49a1-4083-b901-4f4583d22ac0-kube-api-access-mck98\") pod \"kube-proxy-kn5hh\" (UID: \"9fd4100d-49a1-4083-b901-4f4583d22ac0\") " pod="kube-system/kube-proxy-kn5hh"
	Dec 13 13:37:24 pause-484783 kubelet[1308]: I1213 13:37:24.572203    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4d5ad2a1-01cd-4f82-b31e-f3b1719fdc20-cni-cfg\") pod \"kindnet-lr5xb\" (UID: \"4d5ad2a1-01cd-4f82-b31e-f3b1719fdc20\") " pod="kube-system/kindnet-lr5xb"
	Dec 13 13:37:24 pause-484783 kubelet[1308]: I1213 13:37:24.572227    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d5ad2a1-01cd-4f82-b31e-f3b1719fdc20-lib-modules\") pod \"kindnet-lr5xb\" (UID: \"4d5ad2a1-01cd-4f82-b31e-f3b1719fdc20\") " pod="kube-system/kindnet-lr5xb"
	Dec 13 13:37:25 pause-484783 kubelet[1308]: I1213 13:37:25.321214    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kn5hh" podStartSLOduration=1.321150401 podStartE2EDuration="1.321150401s" podCreationTimestamp="2025-12-13 13:37:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:37:25.32092217 +0000 UTC m=+7.181981914" watchObservedRunningTime="2025-12-13 13:37:25.321150401 +0000 UTC m=+7.182210143"
	Dec 13 13:37:25 pause-484783 kubelet[1308]: I1213 13:37:25.321361    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-lr5xb" podStartSLOduration=1.321353269 podStartE2EDuration="1.321353269s" podCreationTimestamp="2025-12-13 13:37:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:37:25.29738797 +0000 UTC m=+7.158447712" watchObservedRunningTime="2025-12-13 13:37:25.321353269 +0000 UTC m=+7.182413010"
	Dec 13 13:38:05 pause-484783 kubelet[1308]: I1213 13:38:05.806409    1308 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 13 13:38:05 pause-484783 kubelet[1308]: I1213 13:38:05.878545    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k84b8\" (UniqueName: \"kubernetes.io/projected/1e6a39a0-3609-42c3-8532-b6f8ceffda42-kube-api-access-k84b8\") pod \"coredns-66bc5c9577-5fv2k\" (UID: \"1e6a39a0-3609-42c3-8532-b6f8ceffda42\") " pod="kube-system/coredns-66bc5c9577-5fv2k"
	Dec 13 13:38:05 pause-484783 kubelet[1308]: I1213 13:38:05.878609    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e6a39a0-3609-42c3-8532-b6f8ceffda42-config-volume\") pod \"coredns-66bc5c9577-5fv2k\" (UID: \"1e6a39a0-3609-42c3-8532-b6f8ceffda42\") " pod="kube-system/coredns-66bc5c9577-5fv2k"
	Dec 13 13:38:05 pause-484783 kubelet[1308]: I1213 13:38:05.979844    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jstcs\" (UniqueName: \"kubernetes.io/projected/019ff04f-8424-4ee8-954f-eb1c487771ce-kube-api-access-jstcs\") pod \"coredns-66bc5c9577-t7b79\" (UID: \"019ff04f-8424-4ee8-954f-eb1c487771ce\") " pod="kube-system/coredns-66bc5c9577-t7b79"
	Dec 13 13:38:05 pause-484783 kubelet[1308]: I1213 13:38:05.979931    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/019ff04f-8424-4ee8-954f-eb1c487771ce-config-volume\") pod \"coredns-66bc5c9577-t7b79\" (UID: \"019ff04f-8424-4ee8-954f-eb1c487771ce\") " pod="kube-system/coredns-66bc5c9577-t7b79"
	Dec 13 13:38:06 pause-484783 kubelet[1308]: I1213 13:38:06.377365    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-t7b79" podStartSLOduration=42.377341812 podStartE2EDuration="42.377341812s" podCreationTimestamp="2025-12-13 13:37:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:38:06.376988023 +0000 UTC m=+48.238047765" watchObservedRunningTime="2025-12-13 13:38:06.377341812 +0000 UTC m=+48.238401554"
	Dec 13 13:38:10 pause-484783 kubelet[1308]: W1213 13:38:10.248006    1308 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 13 13:38:10 pause-484783 kubelet[1308]: E1213 13:38:10.248106    1308 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Dec 13 13:38:10 pause-484783 kubelet[1308]: E1213 13:38:10.248199    1308 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 13 13:38:10 pause-484783 kubelet[1308]: E1213 13:38:10.248220    1308 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 13 13:38:10 pause-484783 kubelet[1308]: E1213 13:38:10.248235    1308 kubelet.go:2614] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 13 13:38:16 pause-484783 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 13:38:16 pause-484783 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 13:38:16 pause-484783 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 13:38:16 pause-484783 systemd[1]: kubelet.service: Consumed 2.393s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-484783 -n pause-484783
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-484783 -n pause-484783: exit status 2 (353.664599ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-484783 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-484783
helpers_test.go:244: (dbg) docker inspect pause-484783:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0642b40c1d8b1a2a6627eb9e57674aa6e6928ebaed3ef0966cf52ae4443cd7d9",
	        "Created": "2025-12-13T13:36:59.032097531Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 580902,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:36:59.086678769Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/0642b40c1d8b1a2a6627eb9e57674aa6e6928ebaed3ef0966cf52ae4443cd7d9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0642b40c1d8b1a2a6627eb9e57674aa6e6928ebaed3ef0966cf52ae4443cd7d9/hostname",
	        "HostsPath": "/var/lib/docker/containers/0642b40c1d8b1a2a6627eb9e57674aa6e6928ebaed3ef0966cf52ae4443cd7d9/hosts",
	        "LogPath": "/var/lib/docker/containers/0642b40c1d8b1a2a6627eb9e57674aa6e6928ebaed3ef0966cf52ae4443cd7d9/0642b40c1d8b1a2a6627eb9e57674aa6e6928ebaed3ef0966cf52ae4443cd7d9-json.log",
	        "Name": "/pause-484783",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-484783:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-484783",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0642b40c1d8b1a2a6627eb9e57674aa6e6928ebaed3ef0966cf52ae4443cd7d9",
	                "LowerDir": "/var/lib/docker/overlay2/57a4d12983411d33d877cc5ffcd68c2da87be1f108a31936a0a5c7efa16199ad-init/diff:/var/lib/docker/overlay2/2ab30f867418f233812f5ff754587aaeab7569a5579dc6a5c99873a35cf81eb6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/57a4d12983411d33d877cc5ffcd68c2da87be1f108a31936a0a5c7efa16199ad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/57a4d12983411d33d877cc5ffcd68c2da87be1f108a31936a0a5c7efa16199ad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/57a4d12983411d33d877cc5ffcd68c2da87be1f108a31936a0a5c7efa16199ad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-484783",
	                "Source": "/var/lib/docker/volumes/pause-484783/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-484783",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-484783",
	                "name.minikube.sigs.k8s.io": "pause-484783",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "94b49d9ab8531277e0a2021e8bd4b09d1ac367d64ff929bcf8a10ba58ee5050f",
	            "SandboxKey": "/var/run/docker/netns/94b49d9ab853",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33356"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33357"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33360"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33358"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33359"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-484783": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "aa75236e9aa704210586687d6c0989e3467da5e3e0be0c2abcfcc1acaecc8c9b",
	                    "EndpointID": "09b410d4a8b13e2e415fd009c05f0223bbfae9a2e7c9b86c70c075866c338f00",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "e6:5f:43:8d:f9:8e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-484783",
	                        "0642b40c1d8b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-484783 -n pause-484783
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-484783 -n pause-484783: exit status 2 (371.346061ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-484783 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-484783 logs -n 25: (1.047471777s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-345214 --schedule 5m -v=5 --alsologtostderr                                                               │ scheduled-stop-345214       │ jenkins │ v1.37.0 │ 13 Dec 25 13:35 UTC │                     │
	│ stop    │ -p scheduled-stop-345214 --schedule 5m -v=5 --alsologtostderr                                                               │ scheduled-stop-345214       │ jenkins │ v1.37.0 │ 13 Dec 25 13:35 UTC │                     │
	│ stop    │ -p scheduled-stop-345214 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-345214       │ jenkins │ v1.37.0 │ 13 Dec 25 13:35 UTC │                     │
	│ stop    │ -p scheduled-stop-345214 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-345214       │ jenkins │ v1.37.0 │ 13 Dec 25 13:35 UTC │                     │
	│ stop    │ -p scheduled-stop-345214 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-345214       │ jenkins │ v1.37.0 │ 13 Dec 25 13:35 UTC │                     │
	│ stop    │ -p scheduled-stop-345214 --cancel-scheduled                                                                                 │ scheduled-stop-345214       │ jenkins │ v1.37.0 │ 13 Dec 25 13:35 UTC │ 13 Dec 25 13:35 UTC │
	│ stop    │ -p scheduled-stop-345214 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-345214       │ jenkins │ v1.37.0 │ 13 Dec 25 13:35 UTC │                     │
	│ stop    │ -p scheduled-stop-345214 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-345214       │ jenkins │ v1.37.0 │ 13 Dec 25 13:35 UTC │                     │
	│ stop    │ -p scheduled-stop-345214 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-345214       │ jenkins │ v1.37.0 │ 13 Dec 25 13:35 UTC │ 13 Dec 25 13:36 UTC │
	│ delete  │ -p scheduled-stop-345214                                                                                                    │ scheduled-stop-345214       │ jenkins │ v1.37.0 │ 13 Dec 25 13:36 UTC │ 13 Dec 25 13:36 UTC │
	│ start   │ -p insufficient-storage-215505 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio            │ insufficient-storage-215505 │ jenkins │ v1.37.0 │ 13 Dec 25 13:36 UTC │                     │
	│ delete  │ -p insufficient-storage-215505                                                                                              │ insufficient-storage-215505 │ jenkins │ v1.37.0 │ 13 Dec 25 13:36 UTC │ 13 Dec 25 13:36 UTC │
	│ start   │ -p offline-crio-444562 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio           │ offline-crio-444562         │ jenkins │ v1.37.0 │ 13 Dec 25 13:36 UTC │ 13 Dec 25 13:38 UTC │
	│ start   │ -p cert-expiration-541985 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-541985      │ jenkins │ v1.37.0 │ 13 Dec 25 13:36 UTC │ 13 Dec 25 13:37 UTC │
	│ start   │ -p pause-484783 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                   │ pause-484783                │ jenkins │ v1.37.0 │ 13 Dec 25 13:36 UTC │ 13 Dec 25 13:38 UTC │
	│ start   │ -p force-systemd-env-488734 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ force-systemd-env-488734    │ jenkins │ v1.37.0 │ 13 Dec 25 13:36 UTC │ 13 Dec 25 13:37 UTC │
	│ delete  │ -p force-systemd-env-488734                                                                                                 │ force-systemd-env-488734    │ jenkins │ v1.37.0 │ 13 Dec 25 13:37 UTC │ 13 Dec 25 13:37 UTC │
	│ start   │ -p force-systemd-flag-212830 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-212830   │ jenkins │ v1.37.0 │ 13 Dec 25 13:37 UTC │ 13 Dec 25 13:37 UTC │
	│ ssh     │ force-systemd-flag-212830 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                        │ force-systemd-flag-212830   │ jenkins │ v1.37.0 │ 13 Dec 25 13:37 UTC │ 13 Dec 25 13:37 UTC │
	│ delete  │ -p force-systemd-flag-212830                                                                                                │ force-systemd-flag-212830   │ jenkins │ v1.37.0 │ 13 Dec 25 13:37 UTC │ 13 Dec 25 13:37 UTC │
	│ start   │ -p stopped-upgrade-627277 --memory=3072 --vm-driver=docker  --container-runtime=crio                                        │ stopped-upgrade-627277      │ jenkins │ v1.35.0 │ 13 Dec 25 13:37 UTC │                     │
	│ delete  │ -p offline-crio-444562                                                                                                      │ offline-crio-444562         │ jenkins │ v1.37.0 │ 13 Dec 25 13:38 UTC │ 13 Dec 25 13:38 UTC │
	│ start   │ -p pause-484783 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-484783                │ jenkins │ v1.37.0 │ 13 Dec 25 13:38 UTC │ 13 Dec 25 13:38 UTC │
	│ start   │ -p missing-upgrade-533439 --memory=3072 --driver=docker  --container-runtime=crio                                           │ missing-upgrade-533439      │ jenkins │ v1.35.0 │ 13 Dec 25 13:38 UTC │                     │
	│ pause   │ -p pause-484783 --alsologtostderr -v=5                                                                                      │ pause-484783                │ jenkins │ v1.37.0 │ 13 Dec 25 13:38 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:38:11
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:38:11.654356  595557 out.go:345] Setting OutFile to fd 1 ...
	I1213 13:38:11.654440  595557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 13:38:11.654443  595557 out.go:358] Setting ErrFile to fd 2...
	I1213 13:38:11.654447  595557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 13:38:11.654657  595557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:38:11.655151  595557 out.go:352] Setting JSON to false
	I1213 13:38:11.656579  595557 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8440,"bootTime":1765624652,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:38:11.656706  595557 start.go:139] virtualization: kvm guest
	I1213 13:38:11.658842  595557 out.go:177] * [missing-upgrade-533439] minikube v1.35.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:38:11.660280  595557 notify.go:220] Checking for updates...
	I1213 13:38:11.660297  595557 out.go:177]   - MINIKUBE_LOCATION=22122
	I1213 13:38:11.661447  595557 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:38:11.663487  595557 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:38:11.664651  595557 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:38:11.665747  595557 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:38:11.667669  595557 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:38:11.669108  595557 config.go:182] Loaded profile config "cert-expiration-541985": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:38:11.669234  595557 config.go:182] Loaded profile config "pause-484783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:38:11.669329  595557 config.go:182] Loaded profile config "stopped-upgrade-627277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 13:38:11.669419  595557 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 13:38:11.696444  595557 docker.go:123] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:38:11.696534  595557 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:38:11.775244  595557 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-13 13:38:11.763905703 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:38:11.775400  595557 docker.go:318] overlay module found
	I1213 13:38:11.780993  595557 out.go:177] * Using the docker driver based on user configuration
	I1213 13:38:11.782112  595557 start.go:297] selected driver: docker
	I1213 13:38:11.782120  595557 start.go:901] validating driver "docker" against <nil>
	I1213 13:38:11.782130  595557 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:38:11.782869  595557 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:38:11.852500  595557 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-13 13:38:11.839894256 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:38:11.852745  595557 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1213 13:38:11.853083  595557 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 13:38:11.854753  595557 out.go:177] * Using Docker driver with root privileges
	I1213 13:38:11.855821  595557 cni.go:84] Creating CNI manager for ""
	I1213 13:38:11.855896  595557 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:38:11.855906  595557 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 13:38:11.855970  595557 start.go:340] cluster config:
	{Name:missing-upgrade-533439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-533439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:38:11.858848  595557 out.go:177] * Starting "missing-upgrade-533439" primary control-plane node in "missing-upgrade-533439" cluster
	I1213 13:38:11.892200  595557 cache.go:121] Beginning downloading kic base image for docker with crio
	I1213 13:38:11.913601  595557 out.go:177] * Pulling base image v0.0.46 ...
	I1213 13:38:11.999006  595557 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I1213 13:38:11.999024  595557 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1213 13:38:11.999089  595557 preload.go:146] Found local preload: /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1213 13:38:11.999097  595557 cache.go:56] Caching tarball of preloaded images
	I1213 13:38:11.999204  595557 preload.go:172] Found /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 13:38:11.999209  595557 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1213 13:38:11.999306  595557 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/missing-upgrade-533439/config.json ...
	I1213 13:38:11.999321  595557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/missing-upgrade-533439/config.json: {Name:mk3aa711e2dc47d91f713899ca00e0889aacdb3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:38:12.021113  595557 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I1213 13:38:12.021134  595557 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I1213 13:38:12.021156  595557 cache.go:227] Successfully downloaded all kic artifacts
	I1213 13:38:12.021188  595557 start.go:360] acquireMachinesLock for missing-upgrade-533439: {Name:mk421c651e43a49875cb2c7dbe4365c6871bf96b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:38:12.021297  595557 start.go:364] duration metric: took 91.843µs to acquireMachinesLock for "missing-upgrade-533439"
	I1213 13:38:12.021321  595557 start.go:93] Provisioning new machine with config: &{Name:missing-upgrade-533439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-533439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:38:12.021493  595557 start.go:125] createHost starting for "" (driver="docker")
	I1213 13:38:09.181918  593655 out.go:252] * Updating the running docker "pause-484783" container ...
	I1213 13:38:09.181963  593655 machine.go:94] provisionDockerMachine start ...
	I1213 13:38:09.182041  593655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484783
	I1213 13:38:09.204672  593655 main.go:143] libmachine: Using SSH client type: native
	I1213 13:38:09.204998  593655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33356 <nil> <nil>}
	I1213 13:38:09.205022  593655 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:38:09.344424  593655 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-484783
	
	I1213 13:38:09.344452  593655 ubuntu.go:182] provisioning hostname "pause-484783"
	I1213 13:38:09.344517  593655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484783
	I1213 13:38:09.369850  593655 main.go:143] libmachine: Using SSH client type: native
	I1213 13:38:09.370318  593655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33356 <nil> <nil>}
	I1213 13:38:09.370341  593655 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-484783 && echo "pause-484783" | sudo tee /etc/hostname
	I1213 13:38:09.523913  593655 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-484783
	
	I1213 13:38:09.524001  593655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484783
	I1213 13:38:09.542589  593655 main.go:143] libmachine: Using SSH client type: native
	I1213 13:38:09.542919  593655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33356 <nil> <nil>}
	I1213 13:38:09.542952  593655 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-484783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-484783/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-484783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:38:09.680974  593655 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 13:38:09.681009  593655 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-390571/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-390571/.minikube}
	I1213 13:38:09.681054  593655 ubuntu.go:190] setting up certificates
	I1213 13:38:09.681066  593655 provision.go:84] configureAuth start
	I1213 13:38:09.681137  593655 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-484783
	I1213 13:38:09.700218  593655 provision.go:143] copyHostCerts
	I1213 13:38:09.700295  593655 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem, removing ...
	I1213 13:38:09.700318  593655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem
	I1213 13:38:09.700408  593655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem (1078 bytes)
	I1213 13:38:09.700614  593655 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem, removing ...
	I1213 13:38:09.700630  593655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem
	I1213 13:38:09.700684  593655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem (1123 bytes)
	I1213 13:38:09.700794  593655 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem, removing ...
	I1213 13:38:09.700808  593655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem
	I1213 13:38:09.700854  593655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem (1679 bytes)
	I1213 13:38:09.700916  593655 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem org=jenkins.pause-484783 san=[127.0.0.1 192.168.103.2 localhost minikube pause-484783]
	I1213 13:38:09.807321  593655 provision.go:177] copyRemoteCerts
	I1213 13:38:09.807383  593655 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:38:09.807437  593655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484783
	I1213 13:38:09.825349  593655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33356 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/pause-484783/id_rsa Username:docker}
	I1213 13:38:09.924112  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 13:38:09.944559  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:38:09.965328  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 13:38:09.982546  593655 provision.go:87] duration metric: took 301.455403ms to configureAuth
	I1213 13:38:09.982573  593655 ubuntu.go:206] setting minikube options for container-runtime
	I1213 13:38:09.982763  593655 config.go:182] Loaded profile config "pause-484783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:38:09.982899  593655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484783
	I1213 13:38:10.002279  593655 main.go:143] libmachine: Using SSH client type: native
	I1213 13:38:10.002492  593655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33356 <nil> <nil>}
	I1213 13:38:10.002509  593655 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:38:10.363493  593655 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:38:10.363523  593655 machine.go:97] duration metric: took 1.181550965s to provisionDockerMachine
	I1213 13:38:10.363536  593655 start.go:293] postStartSetup for "pause-484783" (driver="docker")
	I1213 13:38:10.363549  593655 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:38:10.363614  593655 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:38:10.363663  593655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484783
	I1213 13:38:10.384481  593655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33356 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/pause-484783/id_rsa Username:docker}
	I1213 13:38:10.485108  593655 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:38:10.488757  593655 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 13:38:10.488809  593655 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 13:38:10.488823  593655 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/addons for local assets ...
	I1213 13:38:10.488897  593655 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/files for local assets ...
	I1213 13:38:10.489006  593655 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem -> 3941302.pem in /etc/ssl/certs
	I1213 13:38:10.489132  593655 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 13:38:10.497109  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:38:10.514822  593655 start.go:296] duration metric: took 151.269689ms for postStartSetup
	I1213 13:38:10.514901  593655 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:38:10.514962  593655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484783
	I1213 13:38:10.535952  593655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33356 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/pause-484783/id_rsa Username:docker}
	I1213 13:38:10.632717  593655 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 13:38:10.637958  593655 fix.go:56] duration metric: took 1.477019951s for fixHost
	I1213 13:38:10.637983  593655 start.go:83] releasing machines lock for "pause-484783", held for 1.477066377s
	I1213 13:38:10.638063  593655 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-484783
	I1213 13:38:10.657527  593655 ssh_runner.go:195] Run: cat /version.json
	I1213 13:38:10.657593  593655 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:38:10.657675  593655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484783
	I1213 13:38:10.657595  593655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-484783
	I1213 13:38:10.676461  593655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33356 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/pause-484783/id_rsa Username:docker}
	I1213 13:38:10.677531  593655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33356 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/pause-484783/id_rsa Username:docker}
	I1213 13:38:10.837761  593655 ssh_runner.go:195] Run: systemctl --version
	I1213 13:38:10.845399  593655 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:38:10.887003  593655 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 13:38:10.892313  593655 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:38:10.892374  593655 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:38:10.900652  593655 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 13:38:10.900671  593655 start.go:496] detecting cgroup driver to use...
	I1213 13:38:10.900702  593655 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 13:38:10.900747  593655 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:38:10.917085  593655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:38:10.929109  593655 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:38:10.929166  593655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:38:10.944563  593655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:38:10.957088  593655 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:38:11.083351  593655 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:38:11.231951  593655 docker.go:234] disabling docker service ...
	I1213 13:38:11.232016  593655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:38:11.254423  593655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:38:11.269173  593655 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:38:11.429628  593655 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:38:11.559359  593655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:38:11.575193  593655 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:38:11.593474  593655 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 13:38:11.593546  593655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:11.605405  593655 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 13:38:11.605468  593655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:11.615495  593655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:11.625506  593655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:11.635853  593655 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:38:11.645891  593655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:11.656536  593655 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:11.665674  593655 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:11.675304  593655 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:38:11.685103  593655 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:38:11.694657  593655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:38:11.842605  593655 ssh_runner.go:195] Run: sudo systemctl restart crio
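The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted: pin `pause_image`, force `cgroup_manager = "systemd"`, re-add `conmon_cgroup = "pod"`, and make sure `default_sysctls` carries `net.ipv4.ip_unprivileged_port_start=0`. Below is a minimal Go sketch of the same string rewrites applied to an in-memory copy of such a drop-in; the starting contents are made up for illustration and this is not minikube's code path.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	// A made-up starting point for the 02-crio.conf drop-in.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"
`

	// Same idea as the sed 's|^.*pause_image = .*$|...|' and cgroup_manager rewrites above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)

	// Drop any old conmon_cgroup line, then re-insert it right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = strings.Replace(conf,
		`cgroup_manager = "systemd"`,
		"cgroup_manager = \"systemd\"\nconmon_cgroup = \"pod\"", 1)

	// Ensure default_sysctls exists and allows unprivileged low ports.
	if !strings.Contains(conf, "default_sysctls") {
		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}

	fmt.Print(conf)
}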
	I1213 13:38:12.323586  593655 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:38:12.323679  593655 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:38:12.328068  593655 start.go:564] Will wait 60s for crictl version
	I1213 13:38:12.328136  593655 ssh_runner.go:195] Run: which crictl
	I1213 13:38:12.331951  593655 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 13:38:12.357193  593655 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 13:38:12.357275  593655 ssh_runner.go:195] Run: crio --version
	I1213 13:38:12.393647  593655 ssh_runner.go:195] Run: crio --version
	I1213 13:38:12.432169  593655 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
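After restarting crio, the run waits up to 60s for /var/run/crio/crio.sock to appear and then up to 60s for `crictl version` to answer, which is where the "CRI-O 1.34.3" in the line above comes from. A hedged Go sketch of that wait-for-socket loop, with the path and timeout taken from the log (not minikube's actual polling code):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout expires, mirroring
// "Will wait 60s for socket path /var/run/crio/crio.sock" above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is up; safe to run: sudo crictl version")
}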
	I1213 13:38:10.472074  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:38:10.504122  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1213 13:38:10.529366  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 13:38:10.555606  592646 provision.go:87] duration metric: took 348.215836ms to configureAuth
	I1213 13:38:10.555632  592646 ubuntu.go:193] setting minikube options for container-runtime
	I1213 13:38:10.555879  592646 config.go:182] Loaded profile config "stopped-upgrade-627277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 13:38:10.556045  592646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-627277
	I1213 13:38:10.574975  592646 main.go:141] libmachine: Using SSH client type: native
	I1213 13:38:10.575189  592646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33381 <nil> <nil>}
	I1213 13:38:10.575204  592646 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:38:10.835543  592646 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:38:10.835565  592646 machine.go:96] duration metric: took 1.092736866s to provisionDockerMachine
	I1213 13:38:10.835580  592646 client.go:171] duration metric: took 8.717598011s to LocalClient.Create
	I1213 13:38:10.835611  592646 start.go:167] duration metric: took 8.717667213s to libmachine.API.Create "stopped-upgrade-627277"
	I1213 13:38:10.835645  592646 start.go:293] postStartSetup for "stopped-upgrade-627277" (driver="docker")
	I1213 13:38:10.835659  592646 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:38:10.835736  592646 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:38:10.835831  592646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-627277
	I1213 13:38:10.857358  592646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33381 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/stopped-upgrade-627277/id_rsa Username:docker}
	I1213 13:38:10.952999  592646 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:38:10.956394  592646 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 13:38:10.956416  592646 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1213 13:38:10.956429  592646 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1213 13:38:10.956435  592646 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1213 13:38:10.956446  592646 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/addons for local assets ...
	I1213 13:38:10.956498  592646 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/files for local assets ...
	I1213 13:38:10.956564  592646 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem -> 3941302.pem in /etc/ssl/certs
	I1213 13:38:10.956651  592646 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 13:38:10.965449  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:38:10.999475  592646 start.go:296] duration metric: took 163.812753ms for postStartSetup
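postStartSetup's filesync step scans .minikube/addons and .minikube/files for local assets and copies each one onto the node at the path mirrored under that tree, which is why files/etc/ssl/certs/3941302.pem lands at /etc/ssl/certs/3941302.pem above. A small Go sketch of that source-to-destination mapping, walked locally; the root path is illustrative and the real copy happens over SSH.

package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

// mapAssets walks a local "files" root and maps every regular file to the node
// path it would be copied to (files/etc/ssl/certs/x.pem -> /etc/ssl/certs/x.pem).
func mapAssets(root string) (map[string]string, error) {
	out := map[string]string{}
	err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(root, p)
		if err != nil {
			return err
		}
		out[p] = "/" + filepath.ToSlash(rel)
		return nil
	})
	return out, err
}

func main() {
	assets, err := mapAssets(os.ExpandEnv("$HOME/.minikube/files"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for src, dst := range assets {
		fmt.Printf("%s --> %s\n", src, dst)
	}
}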
	I1213 13:38:10.999990  592646 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-627277
	I1213 13:38:11.030043  592646 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/config.json ...
	I1213 13:38:11.030289  592646 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:38:11.030325  592646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-627277
	I1213 13:38:11.048127  592646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33381 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/stopped-upgrade-627277/id_rsa Username:docker}
	I1213 13:38:11.140904  592646 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 13:38:11.146113  592646 start.go:128] duration metric: took 9.029929974s to createHost
	I1213 13:38:11.146131  592646 start.go:83] releasing machines lock for "stopped-upgrade-627277", held for 9.030049138s
	I1213 13:38:11.146200  592646 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-627277
	I1213 13:38:11.169300  592646 ssh_runner.go:195] Run: cat /version.json
	I1213 13:38:11.169353  592646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-627277
	I1213 13:38:11.169412  592646 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:38:11.169515  592646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-627277
	I1213 13:38:11.191531  592646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33381 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/stopped-upgrade-627277/id_rsa Username:docker}
	I1213 13:38:11.191873  592646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33381 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/stopped-upgrade-627277/id_rsa Username:docker}
	I1213 13:38:11.282888  592646 ssh_runner.go:195] Run: systemctl --version
	I1213 13:38:11.385899  592646 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:38:11.552365  592646 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 13:38:11.559333  592646 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:38:11.589086  592646 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1213 13:38:11.589163  592646 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:38:11.625106  592646 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1213 13:38:11.625124  592646 start.go:495] detecting cgroup driver to use...
	I1213 13:38:11.625159  592646 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 13:38:11.625211  592646 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:38:11.642234  592646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:38:11.655864  592646 docker.go:217] disabling cri-docker service (if available) ...
	I1213 13:38:11.655945  592646 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:38:11.674167  592646 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:38:11.694365  592646 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:38:11.787494  592646 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:38:11.881328  592646 docker.go:233] disabling docker service ...
	I1213 13:38:11.881377  592646 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:38:11.899757  592646 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:38:11.912516  592646 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:38:12.075668  592646 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:38:12.306212  592646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:38:12.323703  592646 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:38:12.341408  592646 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1213 13:38:12.341476  592646 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:12.357829  592646 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 13:38:12.357903  592646 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:12.369634  592646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:12.382167  592646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:12.395387  592646 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:38:12.407028  592646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:12.419575  592646 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:12.437282  592646 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:12.449552  592646 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:38:12.459306  592646 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:38:12.468360  592646 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:38:12.553600  592646 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 13:38:12.681007  592646 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:38:12.681171  592646 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:38:12.685685  592646 start.go:563] Will wait 60s for crictl version
	I1213 13:38:12.685740  592646 ssh_runner.go:195] Run: which crictl
	I1213 13:38:12.689825  592646 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 13:38:12.738702  592646 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1213 13:38:12.738800  592646 ssh_runner.go:195] Run: crio --version
	I1213 13:38:12.776313  592646 ssh_runner.go:195] Run: crio --version
	I1213 13:38:12.829611  592646 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.24.6 ...
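Both runs shell out to `crictl version` and log its key/value output (Version, RuntimeName, RuntimeVersion, RuntimeApiVersion); those values feed the "Preparing Kubernetes ... on CRI-O 1.34.3 / 1.24.6" lines. A small Go sketch of parsing that output into a struct, with field names taken from the log rather than from minikube's source:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// VersionInfo holds the fields crictl version prints, as seen in the log.
type VersionInfo struct {
	Version, RuntimeName, RuntimeVersion, RuntimeAPIVersion string
}

func parseCrictlVersion(out string) VersionInfo {
	var v VersionInfo
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		key, val, ok := strings.Cut(sc.Text(), ":")
		if !ok {
			continue
		}
		val = strings.TrimSpace(val)
		switch strings.TrimSpace(key) {
		case "Version":
			v.Version = val
		case "RuntimeName":
			v.RuntimeName = val
		case "RuntimeVersion":
			v.RuntimeVersion = val
		case "RuntimeApiVersion":
			v.RuntimeAPIVersion = val
		}
	}
	return v
}

func main() {
	sample := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.24.6\nRuntimeApiVersion:  v1\n"
	v := parseCrictlVersion(sample)
	fmt.Printf("runtime %s %s (CRI %s)\n", v.RuntimeName, v.RuntimeVersion, v.RuntimeAPIVersion)
}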
	I1213 13:38:12.433366  593655 cli_runner.go:164] Run: docker network inspect pause-484783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:38:12.452312  593655 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1213 13:38:12.456935  593655 kubeadm.go:884] updating cluster {Name:pause-484783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-484783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regis
try-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:38:12.457208  593655 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:38:12.457276  593655 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:38:12.497444  593655 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:38:12.497478  593655 crio.go:433] Images already preloaded, skipping extraction
	I1213 13:38:12.497545  593655 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:38:12.535170  593655 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:38:12.535196  593655 cache_images.go:86] Images are preloaded, skipping loading
	I1213 13:38:12.535204  593655 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1213 13:38:12.535312  593655 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-484783 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-484783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 13:38:12.535383  593655 ssh_runner.go:195] Run: crio config
	I1213 13:38:12.590097  593655 cni.go:84] Creating CNI manager for ""
	I1213 13:38:12.590118  593655 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:38:12.590136  593655 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 13:38:12.590161  593655 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-484783 NodeName:pause-484783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:38:12.590310  593655 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-484783"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 13:38:12.590386  593655 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 13:38:12.600067  593655 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:38:12.600144  593655 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:38:12.608954  593655 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1213 13:38:12.629091  593655 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 13:38:12.643583  593655 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
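The generated kubeadm config logged above is a single multi-document YAML (InitConfiguration and ClusterConfiguration from kubeadm.k8s.io/v1beta4, plus KubeletConfiguration and KubeProxyConfiguration) that gets written to /var/tmp/minikube/kubeadm.yaml.new on the node. A stdlib-only Go sketch that splits such a file on `---` separators and reports each document's kind; it uses plain line scanning instead of a YAML library and is only meant to show the file's shape.

package main

import (
	"fmt"
	"os"
	"strings"
)

// docKinds splits a multi-document YAML on "---" lines and pulls out each
// document's top-level "kind:" value, e.g. InitConfiguration, ClusterConfiguration,
// KubeletConfiguration, KubeProxyConfiguration for the config above.
func docKinds(yaml string) []string {
	var kinds []string
	for _, doc := range strings.Split(yaml, "\n---") {
		for _, line := range strings.Split(doc, "\n") {
			if rest, ok := strings.CutPrefix(strings.TrimSpace(line), "kind:"); ok {
				kinds = append(kinds, strings.TrimSpace(rest))
				break
			}
		}
	}
	return kinds
}

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("documents:", strings.Join(docKinds(string(data)), ", "))
}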
	I1213 13:38:12.658522  593655 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1213 13:38:12.663654  593655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:38:12.789009  593655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:38:12.807145  593655 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783 for IP: 192.168.103.2
	I1213 13:38:12.807166  593655 certs.go:195] generating shared ca certs ...
	I1213 13:38:12.807183  593655 certs.go:227] acquiring lock for ca certs: {Name:mkb6963f3134ffd486c672ddb3a967e56122d5d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:38:12.807370  593655 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key
	I1213 13:38:12.807407  593655 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key
	I1213 13:38:12.807417  593655 certs.go:257] generating profile certs ...
	I1213 13:38:12.807675  593655 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/client.key
	I1213 13:38:12.807747  593655 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/apiserver.key.8b604a96
	I1213 13:38:12.807815  593655 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/proxy-client.key
	I1213 13:38:12.808022  593655 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem (1338 bytes)
	W1213 13:38:12.808061  593655 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130_empty.pem, impossibly tiny 0 bytes
	I1213 13:38:12.808075  593655 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:38:12.808098  593655 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:38:12.808126  593655 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:38:12.808145  593655 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem (1679 bytes)
	I1213 13:38:12.808189  593655 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:38:12.808972  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:38:12.832132  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 13:38:12.852770  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:38:12.872982  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 13:38:12.892808  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 13:38:12.914520  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 13:38:12.934505  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:38:12.953400  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 13:38:12.972581  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:38:12.993724  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem --> /usr/share/ca-certificates/394130.pem (1338 bytes)
	I1213 13:38:13.012062  593655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /usr/share/ca-certificates/3941302.pem (1708 bytes)
	I1213 13:38:13.032418  593655 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:38:13.047037  593655 ssh_runner.go:195] Run: openssl version
	I1213 13:38:13.053917  593655 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/394130.pem
	I1213 13:38:13.063099  593655 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/394130.pem /etc/ssl/certs/394130.pem
	I1213 13:38:13.071453  593655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/394130.pem
	I1213 13:38:13.076510  593655 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/394130.pem
	I1213 13:38:13.076584  593655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/394130.pem
	I1213 13:38:13.126141  593655 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 13:38:13.134735  593655 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3941302.pem
	I1213 13:38:13.142828  593655 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3941302.pem /etc/ssl/certs/3941302.pem
	I1213 13:38:13.151894  593655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3941302.pem
	I1213 13:38:13.156075  593655 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/3941302.pem
	I1213 13:38:13.156137  593655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3941302.pem
	I1213 13:38:13.202944  593655 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 13:38:13.211144  593655 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:38:13.219943  593655 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:38:13.230304  593655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:38:13.234753  593655 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:38:13.234843  593655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:38:13.283842  593655 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 13:38:13.293053  593655 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:38:13.297757  593655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 13:38:13.333861  593655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 13:38:13.371271  593655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 13:38:13.418018  593655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 13:38:13.453976  593655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 13:38:13.491408  593655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
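Each control-plane certificate is probed with `openssl x509 -noout -checkend 86400`, i.e. "will this cert still be valid 24 hours from now"; a failing check would force regeneration instead of reuse. A local Go equivalent using crypto/x509 instead of shelling out, with the file path taken from the log and the same 86400-second window:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question `openssl x509 -noout -checkend 86400` answers above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h - would be regenerated")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}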
	I1213 13:38:13.527945  593655 kubeadm.go:401] StartCluster: {Name:pause-484783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-484783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry
-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:38:13.528063  593655 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:38:13.528117  593655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:38:13.560008  593655 cri.go:89] found id: "b7a079362a05a33a6c08a026193aa38711129117e8344e3afb4881a090d24a14"
	I1213 13:38:13.560032  593655 cri.go:89] found id: "b12706571c86e31586b14584ea7da146350691dcdca5e35260ce4e8e2451dd17"
	I1213 13:38:13.560038  593655 cri.go:89] found id: "a86b7f66e2ebc03c4274e16df61bd2f021f1bd1120855d661801ec9c61029a88"
	I1213 13:38:13.560043  593655 cri.go:89] found id: "ac41f2f2f9b1bf7cd0fe9a4142241d9e5845d510dd7698a4d9ef37991b4c7c01"
	I1213 13:38:13.560047  593655 cri.go:89] found id: "9c7bf796178c3a16afac713c7182399638dd0c8cf1ff2a54bcf6a4c0c606997e"
	I1213 13:38:13.560052  593655 cri.go:89] found id: "d3b97f25c25e6b8f8c97f9a4d2b4d8d07f26642a7dceb69f6e3a270f5f27f195"
	I1213 13:38:13.560057  593655 cri.go:89] found id: "15061cf1e02860e17c437138c82b5df9e17b52159da0cdab64b358cbd74510ac"
	I1213 13:38:13.560061  593655 cri.go:89] found id: "199ba1838f628b89e26d7b2703ef401cffa89869e6755f9fc80b4d636b3fdc88"
	I1213 13:38:13.560068  593655 cri.go:89] found id: ""
	I1213 13:38:13.560123  593655 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 13:38:13.573838  593655 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:38:13Z" level=error msg="open /run/runc: no such file or directory"
	I1213 13:38:13.573910  593655 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:38:13.583183  593655 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 13:38:13.583203  593655 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 13:38:13.583255  593655 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 13:38:13.592821  593655 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 13:38:13.593571  593655 kubeconfig.go:125] found "pause-484783" server: "https://192.168.103.2:8443"
	I1213 13:38:13.594531  593655 kapi.go:59] client config for pause-484783: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/client.key", CAFile:"/home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]
string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 13:38:13.595100  593655 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 13:38:13.595121  593655 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 13:38:13.595126  593655 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 13:38:13.595131  593655 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 13:38:13.595135  593655 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 13:38:13.595546  593655 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 13:38:13.603864  593655 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1213 13:38:13.603903  593655 kubeadm.go:602] duration metric: took 20.692602ms to restartPrimaryControlPlane
	I1213 13:38:13.603915  593655 kubeadm.go:403] duration metric: took 75.983303ms to StartCluster
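The decision at kubeadm.go:635 comes from the `diff -u kubeadm.yaml kubeadm.yaml.new` run just above: the freshly rendered config matches what is already on the node, so restartPrimaryControlPlane can skip a kubeadm re-run. A hedged Go sketch of that comparison (byte-for-byte instead of a unified diff, paths from the log):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// needsReconfig mimics the diff check above: if the freshly rendered
// kubeadm.yaml.new differs from the kubeadm.yaml already on the node,
// the control plane would have to be reconfigured.
func needsReconfig(current, proposed string) (bool, error) {
	a, err := os.ReadFile(current)
	if err != nil {
		return false, err
	}
	b, err := os.ReadFile(proposed)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(a, b), nil
}

func main() {
	changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if changed {
		fmt.Println("configs differ - control plane needs reconfiguration")
	} else {
		fmt.Println("the running cluster does not require reconfiguration")
	}
}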
	I1213 13:38:13.603935  593655 settings.go:142] acquiring lock: {Name:mkb44193ba58b09d8615650747eaad19c43e1a80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:38:13.604013  593655 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:38:13.605106  593655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/kubeconfig: {Name:mke96882ff9199e558f67b9408c8f04265bde7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:38:13.605332  593655 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:38:13.605399  593655 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 13:38:13.605643  593655 config.go:182] Loaded profile config "pause-484783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:38:13.615945  593655 out.go:179] * Verifying Kubernetes components...
	I1213 13:38:13.615951  593655 out.go:179] * Enabled addons: 
	I1213 13:38:13.617296  593655 addons.go:530] duration metric: took 11.90665ms for enable addons: enabled=[]
	I1213 13:38:13.617340  593655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:38:13.747520  593655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:38:13.761638  593655 node_ready.go:35] waiting up to 6m0s for node "pause-484783" to be "Ready" ...
	I1213 13:38:13.771765  593655 node_ready.go:49] node "pause-484783" is "Ready"
	I1213 13:38:13.771827  593655 node_ready.go:38] duration metric: took 10.136223ms for node "pause-484783" to be "Ready" ...
	I1213 13:38:13.771843  593655 api_server.go:52] waiting for apiserver process to appear ...
	I1213 13:38:13.771899  593655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:38:13.783651  593655 api_server.go:72] duration metric: took 178.284197ms to wait for apiserver process to appear ...
	I1213 13:38:13.783678  593655 api_server.go:88] waiting for apiserver healthz status ...
	I1213 13:38:13.783706  593655 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1213 13:38:13.788696  593655 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1213 13:38:13.789700  593655 api_server.go:141] control plane version: v1.34.2
	I1213 13:38:13.789730  593655 api_server.go:131] duration metric: took 6.044088ms to wait for apiserver health ...
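Apiserver health is decided by GETting https://192.168.103.2:8443/healthz and expecting a 200 "ok" body; the control-plane version then comes from the API server itself. Below is a hedged Go sketch of such a probe that presents the profile's client certificate and trusts the minikube CA (the cert/key/CA paths mirror the kapi client config logged earlier, rebased onto $HOME/.minikube for a default local setup; this is not minikube's exact client construction).

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	// Trust the minikube CA so the apiserver's serving certificate verifies.
	caPEM, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/ca.crt"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	// Authenticate with the profile's client certificate, as the kapi config above does.
	cert, err := tls.LoadX509KeyPair(
		os.ExpandEnv("$HOME/.minikube/profiles/pause-484783/client.crt"),
		os.ExpandEnv("$HOME/.minikube/profiles/pause-484783/client.key"),
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool, Certificates: []tls.Certificate{cert}},
		},
	}

	resp, err := client.Get("https://192.168.103.2:8443/healthz")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
}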
	I1213 13:38:13.789741  593655 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 13:38:13.793630  593655 system_pods.go:59] 8 kube-system pods found
	I1213 13:38:13.793655  593655 system_pods.go:61] "coredns-66bc5c9577-5fv2k" [1e6a39a0-3609-42c3-8532-b6f8ceffda42] Running
	I1213 13:38:13.793660  593655 system_pods.go:61] "coredns-66bc5c9577-t7b79" [019ff04f-8424-4ee8-954f-eb1c487771ce] Running
	I1213 13:38:13.793664  593655 system_pods.go:61] "etcd-pause-484783" [5b4441d6-1c6a-4fed-89f8-7ca5b757902e] Running
	I1213 13:38:13.793667  593655 system_pods.go:61] "kindnet-lr5xb" [4d5ad2a1-01cd-4f82-b31e-f3b1719fdc20] Running
	I1213 13:38:13.793671  593655 system_pods.go:61] "kube-apiserver-pause-484783" [f06441a2-a912-4ea8-9ff6-840d8d955998] Running
	I1213 13:38:13.793675  593655 system_pods.go:61] "kube-controller-manager-pause-484783" [5c7ba362-ed69-4ec3-8e13-815fc922a278] Running
	I1213 13:38:13.793679  593655 system_pods.go:61] "kube-proxy-kn5hh" [9fd4100d-49a1-4083-b901-4f4583d22ac0] Running
	I1213 13:38:13.793682  593655 system_pods.go:61] "kube-scheduler-pause-484783" [a91afa15-1576-461c-8806-66dd0f5b9209] Running
	I1213 13:38:13.793687  593655 system_pods.go:74] duration metric: took 3.939525ms to wait for pod list to return data ...
	I1213 13:38:13.793693  593655 default_sa.go:34] waiting for default service account to be created ...
	I1213 13:38:13.795586  593655 default_sa.go:45] found service account: "default"
	I1213 13:38:13.795619  593655 default_sa.go:55] duration metric: took 1.903411ms for default service account to be created ...
	I1213 13:38:13.795629  593655 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 13:38:13.798236  593655 system_pods.go:86] 8 kube-system pods found
	I1213 13:38:13.798264  593655 system_pods.go:89] "coredns-66bc5c9577-5fv2k" [1e6a39a0-3609-42c3-8532-b6f8ceffda42] Running
	I1213 13:38:13.798273  593655 system_pods.go:89] "coredns-66bc5c9577-t7b79" [019ff04f-8424-4ee8-954f-eb1c487771ce] Running
	I1213 13:38:13.798279  593655 system_pods.go:89] "etcd-pause-484783" [5b4441d6-1c6a-4fed-89f8-7ca5b757902e] Running
	I1213 13:38:13.798284  593655 system_pods.go:89] "kindnet-lr5xb" [4d5ad2a1-01cd-4f82-b31e-f3b1719fdc20] Running
	I1213 13:38:13.798293  593655 system_pods.go:89] "kube-apiserver-pause-484783" [f06441a2-a912-4ea8-9ff6-840d8d955998] Running
	I1213 13:38:13.798299  593655 system_pods.go:89] "kube-controller-manager-pause-484783" [5c7ba362-ed69-4ec3-8e13-815fc922a278] Running
	I1213 13:38:13.798308  593655 system_pods.go:89] "kube-proxy-kn5hh" [9fd4100d-49a1-4083-b901-4f4583d22ac0] Running
	I1213 13:38:13.798313  593655 system_pods.go:89] "kube-scheduler-pause-484783" [a91afa15-1576-461c-8806-66dd0f5b9209] Running
	I1213 13:38:13.798321  593655 system_pods.go:126] duration metric: took 2.681313ms to wait for k8s-apps to be running ...
	I1213 13:38:13.798333  593655 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 13:38:13.798386  593655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:38:13.811491  593655 system_svc.go:56] duration metric: took 13.14952ms WaitForService to wait for kubelet
	I1213 13:38:13.811518  593655 kubeadm.go:587] duration metric: took 206.157232ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:38:13.811540  593655 node_conditions.go:102] verifying NodePressure condition ...
	I1213 13:38:13.814176  593655 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 13:38:13.814216  593655 node_conditions.go:123] node cpu capacity is 8
	I1213 13:38:13.814234  593655 node_conditions.go:105] duration metric: took 2.687845ms to run NodePressure ...
	I1213 13:38:13.814250  593655 start.go:242] waiting for startup goroutines ...
	I1213 13:38:13.814261  593655 start.go:247] waiting for cluster config update ...
	I1213 13:38:13.814272  593655 start.go:256] writing updated cluster config ...
	I1213 13:38:13.814634  593655 ssh_runner.go:195] Run: rm -f paused
	I1213 13:38:13.818426  593655 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:38:13.819005  593655 kapi.go:59] client config for pause-484783: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-390571/.minikube/profiles/pause-484783/client.key", CAFile:"/home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]
string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 13:38:13.821755  593655 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5fv2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:13.826281  593655 pod_ready.go:94] pod "coredns-66bc5c9577-5fv2k" is "Ready"
	I1213 13:38:13.826308  593655 pod_ready.go:86] duration metric: took 4.507585ms for pod "coredns-66bc5c9577-5fv2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:13.826319  593655 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t7b79" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:13.830514  593655 pod_ready.go:94] pod "coredns-66bc5c9577-t7b79" is "Ready"
	I1213 13:38:13.830537  593655 pod_ready.go:86] duration metric: took 4.210977ms for pod "coredns-66bc5c9577-t7b79" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:13.832409  593655 pod_ready.go:83] waiting for pod "etcd-pause-484783" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:13.836262  593655 pod_ready.go:94] pod "etcd-pause-484783" is "Ready"
	I1213 13:38:13.836283  593655 pod_ready.go:86] duration metric: took 3.856519ms for pod "etcd-pause-484783" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:13.838012  593655 pod_ready.go:83] waiting for pod "kube-apiserver-pause-484783" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:12.831415  592646 cli_runner.go:164] Run: docker network inspect stopped-upgrade-627277 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:38:12.852704  592646 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 13:38:12.857031  592646 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:38:12.870140  592646 kubeadm.go:883] updating cluster {Name:stopped-upgrade-627277 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-627277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:38:12.870286  592646 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1213 13:38:12.870355  592646 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:38:12.957362  592646 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:38:12.957382  592646 crio.go:433] Images already preloaded, skipping extraction
	I1213 13:38:12.957440  592646 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:38:12.996286  592646 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:38:12.996299  592646 cache_images.go:84] Images are preloaded, skipping loading
	I1213 13:38:12.996307  592646 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.0 crio true true} ...
	I1213 13:38:12.996396  592646 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=stopped-upgrade-627277 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-627277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 13:38:12.996460  592646 ssh_runner.go:195] Run: crio config
	I1213 13:38:13.044192  592646 cni.go:84] Creating CNI manager for ""
	I1213 13:38:13.044209  592646 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:38:13.044222  592646 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1213 13:38:13.044253  592646 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-627277 NodeName:stopped-upgrade-627277 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:38:13.044449  592646 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-627277"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
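
The kubeadm config rendered above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. As a minimal sketch (not minikube code), the stream can be split and sanity-checked with gopkg.in/yaml.v3; the kindOnly struct and the hard-coded path are assumptions for illustration only.

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// kindOnly captures just the identifying fields of each kubeadm document.
type kindOnly struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	// Path taken from the log below; adjust as needed.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc kindOnly
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more YAML documents in the stream
			}
			panic(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}
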
	
	I1213 13:38:13.044531  592646 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1213 13:38:13.055329  592646 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 13:38:13.055393  592646 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:38:13.065282  592646 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1213 13:38:13.084364  592646 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 13:38:13.105965  592646 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1213 13:38:13.126860  592646 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 13:38:13.131572  592646 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:38:13.144127  592646 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:38:13.223929  592646 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:38:13.247791  592646 certs.go:68] Setting up /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277 for IP: 192.168.85.2
	I1213 13:38:13.247808  592646 certs.go:194] generating shared ca certs ...
	I1213 13:38:13.247830  592646 certs.go:226] acquiring lock for ca certs: {Name:mkb6963f3134ffd486c672ddb3a967e56122d5d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:38:13.247993  592646 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key
	I1213 13:38:13.248040  592646 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key
	I1213 13:38:13.248046  592646 certs.go:256] generating profile certs ...
	I1213 13:38:13.248138  592646 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/client.key
	I1213 13:38:13.248152  592646 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/client.crt with IP's: []
	I1213 13:38:13.537510  592646 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/client.crt ...
	I1213 13:38:13.537529  592646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/client.crt: {Name:mk869752f878e37fe04e50ccc342273c02967a43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:38:13.537706  592646 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/client.key ...
	I1213 13:38:13.537717  592646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/client.key: {Name:mk87c0f9f176710f4412fd732d5cc61d024a4440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:38:13.537862  592646 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.key.b4da4589
	I1213 13:38:13.537877  592646 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.crt.b4da4589 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 13:38:13.640950  592646 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.crt.b4da4589 ...
	I1213 13:38:13.640968  592646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.crt.b4da4589: {Name:mkd1d7a248ab81559206b0df2067477218dcb7be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:38:13.641113  592646 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.key.b4da4589 ...
	I1213 13:38:13.641121  592646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.key.b4da4589: {Name:mkaeb313542685a81d95cde202a259e441c6c1f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:38:13.641196  592646 certs.go:381] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.crt.b4da4589 -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.crt
	I1213 13:38:13.641262  592646 certs.go:385] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.key.b4da4589 -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.key
	I1213 13:38:13.641308  592646 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/proxy-client.key
	I1213 13:38:13.641319  592646 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/proxy-client.crt with IP's: []
	I1213 13:38:13.895634  592646 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/proxy-client.crt ...
	I1213 13:38:13.895651  592646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/proxy-client.crt: {Name:mk607aedb5ba13e5ac459a51d2799265d0c01fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:38:13.895851  592646 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/proxy-client.key ...
	I1213 13:38:13.895861  592646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/proxy-client.key: {Name:mk1dfdf1437138721eeca3ac125ce29414725d6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:38:13.896107  592646 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem (1338 bytes)
	W1213 13:38:13.896143  592646 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130_empty.pem, impossibly tiny 0 bytes
	I1213 13:38:13.896156  592646 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:38:13.896178  592646 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:38:13.896196  592646 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:38:13.896219  592646 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem (1679 bytes)
	I1213 13:38:13.896255  592646 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:38:13.896995  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:38:13.923164  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 13:38:13.947713  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:38:13.972159  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 13:38:13.996923  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 13:38:14.020884  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 13:38:14.046975  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:38:14.072090  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 13:38:14.098114  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:38:14.130405  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem --> /usr/share/ca-certificates/394130.pem (1338 bytes)
	I1213 13:38:14.160044  592646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /usr/share/ca-certificates/3941302.pem (1708 bytes)
	I1213 13:38:14.188863  592646 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
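
The certs.go/crypto.go lines above generate a client cert, an apiserver cert with IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2], and an aggregator proxy-client cert, all signed by the shared minikubeCA, before copying everything into /var/lib/minikube/certs. Below is a minimal standard-library sketch of that pattern (self-signed CA plus one CA-signed leaf with the same IP SANs); it is not minikube's crypto.go, and the subjects, key size, and validity period are illustrative assumptions. Errors are dropped for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA key pair (stands in for minikubeCA above).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert with the IP SANs seen in the apiserver cert above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
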
	I1213 13:38:14.210034  592646 ssh_runner.go:195] Run: openssl version
	I1213 13:38:14.215663  592646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 13:38:14.226130  592646 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:38:14.229920  592646 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:38:14.229962  592646 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:38:14.236899  592646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1213 13:38:14.246638  592646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/394130.pem && ln -fs /usr/share/ca-certificates/394130.pem /etc/ssl/certs/394130.pem"
	I1213 13:38:14.256647  592646 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/394130.pem
	I1213 13:38:14.260905  592646 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/394130.pem
	I1213 13:38:14.260957  592646 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/394130.pem
	I1213 13:38:14.267822  592646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/394130.pem /etc/ssl/certs/51391683.0"
	I1213 13:38:14.278071  592646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3941302.pem && ln -fs /usr/share/ca-certificates/3941302.pem /etc/ssl/certs/3941302.pem"
	I1213 13:38:14.290132  592646 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3941302.pem
	I1213 13:38:14.293998  592646 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/3941302.pem
	I1213 13:38:14.294067  592646 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3941302.pem
	I1213 13:38:14.300995  592646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3941302.pem /etc/ssl/certs/3ec20f2e.0"
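
Each CA above is installed into /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0), using `openssl x509 -hash -noout -in` followed by `ln -fs`. Below is a minimal sketch of that hash-and-link step, shelling out to the same openssl invocation; hashLink and the example paths are assumptions for illustration, not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink symlinks certPath into linkDir as "<subject-hash>.0",
// using the same `openssl x509 -hash -noout -in` call seen in the log.
func hashLink(certPath, linkDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(linkDir, hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
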
	I1213 13:38:14.311377  592646 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:38:14.315281  592646 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 13:38:14.315349  592646 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-627277 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-627277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:38:14.315447  592646 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:38:14.315526  592646 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:38:14.355748  592646 cri.go:89] found id: ""
	I1213 13:38:14.355850  592646 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:38:14.365959  592646 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 13:38:14.376620  592646 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1213 13:38:14.376674  592646 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 13:38:14.387243  592646 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 13:38:14.387260  592646 kubeadm.go:157] found existing configuration files:
	
	I1213 13:38:14.387311  592646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 13:38:14.397268  592646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 13:38:14.397329  592646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 13:38:14.406254  592646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 13:38:14.415285  592646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 13:38:14.415335  592646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 13:38:14.424347  592646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 13:38:14.433596  592646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 13:38:14.433641  592646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 13:38:14.442606  592646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 13:38:14.451995  592646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 13:38:14.452043  592646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
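
The stale-config check above greps each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf for https://control-plane.minikube.internal:8443 and removes any file that does not reference it (in this run none of the files exist yet). Below is a minimal sketch of the same check-then-remove logic; removeIfStale is an illustrative helper, not minikube's code.

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale deletes conf when it does not reference endpoint,
// mirroring the grep-then-rm sequence in the log above.
func removeIfStale(conf, endpoint string) error {
	data, err := os.ReadFile(conf)
	if os.IsNotExist(err) {
		return nil // nothing to clean up, as in this run
	}
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // config already points at the expected endpoint
	}
	return os.Remove(conf)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, c := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(c, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
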
	I1213 13:38:14.460700  592646 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 13:38:14.520030  592646 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1213 13:38:14.575326  592646 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 13:38:14.023345  593655 pod_ready.go:94] pod "kube-apiserver-pause-484783" is "Ready"
	I1213 13:38:14.023372  593655 pod_ready.go:86] duration metric: took 185.338795ms for pod "kube-apiserver-pause-484783" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:14.222185  593655 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-484783" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:14.623294  593655 pod_ready.go:94] pod "kube-controller-manager-pause-484783" is "Ready"
	I1213 13:38:14.623327  593655 pod_ready.go:86] duration metric: took 401.115125ms for pod "kube-controller-manager-pause-484783" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:14.822449  593655 pod_ready.go:83] waiting for pod "kube-proxy-kn5hh" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:15.222275  593655 pod_ready.go:94] pod "kube-proxy-kn5hh" is "Ready"
	I1213 13:38:15.222306  593655 pod_ready.go:86] duration metric: took 399.827535ms for pod "kube-proxy-kn5hh" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:15.422486  593655 pod_ready.go:83] waiting for pod "kube-scheduler-pause-484783" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:15.822550  593655 pod_ready.go:94] pod "kube-scheduler-pause-484783" is "Ready"
	I1213 13:38:15.822584  593655 pod_ready.go:86] duration metric: took 400.071657ms for pod "kube-scheduler-pause-484783" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:38:15.822597  593655 pod_ready.go:40] duration metric: took 2.004130124s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:38:15.871402  593655 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 13:38:16.037568  593655 out.go:179] * Done! kubectl is now configured to use "pause-484783" cluster and "default" namespace by default
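
The pod_ready.go lines above poll each control-plane pod until its Ready condition is met, recording a duration metric per pod and an overall extra wait of about 2s. Below is a minimal, generic sketch of such a poll-until-ready loop (standard library only, no client-go); waitReady, the interval, and the placeholder check function are assumptions for illustration, not minikube's implementation.

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitReady polls check every interval until it reports ready or ctx expires,
// the same shape as the per-pod waits logged above.
func waitReady(ctx context.Context, interval time.Duration, check func() (bool, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		ready, err := check()
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for readiness")
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	start := time.Now()
	// Placeholder check: a real caller would query the pod's Ready condition.
	err := waitReady(ctx, 200*time.Millisecond, func() (bool, error) { return true, nil })
	fmt.Printf("ready=%v after %s\n", err == nil, time.Since(start))
}
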
	I1213 13:38:12.079836  595557 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 13:38:12.080133  595557 start.go:159] libmachine.API.Create for "missing-upgrade-533439" (driver="docker")
	I1213 13:38:12.080174  595557 client.go:168] LocalClient.Create starting
	I1213 13:38:12.080290  595557 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem
	I1213 13:38:12.080335  595557 main.go:141] libmachine: Decoding PEM data...
	I1213 13:38:12.080354  595557 main.go:141] libmachine: Parsing certificate...
	I1213 13:38:12.080421  595557 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem
	I1213 13:38:12.080443  595557 main.go:141] libmachine: Decoding PEM data...
	I1213 13:38:12.080455  595557 main.go:141] libmachine: Parsing certificate...
	I1213 13:38:12.080998  595557 cli_runner.go:164] Run: docker network inspect missing-upgrade-533439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 13:38:12.099189  595557 cli_runner.go:211] docker network inspect missing-upgrade-533439 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 13:38:12.099259  595557 network_create.go:284] running [docker network inspect missing-upgrade-533439] to gather additional debugging logs...
	I1213 13:38:12.099271  595557 cli_runner.go:164] Run: docker network inspect missing-upgrade-533439
	W1213 13:38:12.114991  595557 cli_runner.go:211] docker network inspect missing-upgrade-533439 returned with exit code 1
	I1213 13:38:12.115014  595557 network_create.go:287] error running [docker network inspect missing-upgrade-533439]: docker network inspect missing-upgrade-533439: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-533439 not found
	I1213 13:38:12.115047  595557 network_create.go:289] output of [docker network inspect missing-upgrade-533439]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-533439 not found
	
	** /stderr **
	I1213 13:38:12.115174  595557 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:38:12.132586  595557 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-90c6185d3a1c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:d7:d8:45:ed:62} reservation:<nil>}
	I1213 13:38:12.134001  595557 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b99c511b2851 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:f5:60:cf:cf:e0} reservation:<nil>}
	I1213 13:38:12.135003  595557 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8173e81c4a82 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:76:c5:9d:b0:f9} reservation:<nil>}
	I1213 13:38:12.136502  595557 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001faa2a0}
	I1213 13:38:12.136537  595557 network_create.go:124] attempt to create docker network missing-upgrade-533439 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 13:38:12.136604  595557 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-533439 missing-upgrade-533439
	I1213 13:38:12.193451  595557 network_create.go:108] docker network missing-upgrade-533439 192.168.76.0/24 created
	I1213 13:38:12.193479  595557 kic.go:121] calculated static IP "192.168.76.2" for the "missing-upgrade-533439" container
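
The network.go lines above skip 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 because they are already taken and settle on 192.168.76.0/24, i.e. the third octet advances in steps of 9 until a free /24 is found. Below is a minimal sketch of that selection loop; firstFreeSubnet and the step size are inferred from this log for illustration and are not a claim about minikube internals.

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks 192.168.<start>.0/24, 192.168.<start+step>.0/24, ...
// and returns the first candidate not present in taken, mirroring the
// skip-taken-subnets behaviour in the log (49 -> 58 -> 67 -> 76).
func firstFreeSubnet(start, step int, taken map[string]bool) (*net.IPNet, error) {
	for octet := start; octet < 255; octet += step {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[cidr] {
			continue
		}
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return subnet, nil
	}
	return nil, fmt.Errorf("no free /24 found")
}

func main() {
	// Subnets reported as taken in the log above.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	subnet, err := firstFreeSubnet(49, 9, taken)
	if err != nil {
		panic(err)
	}
	fmt.Println("using", subnet) // 192.168.76.0/24 for this run
}
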
	I1213 13:38:12.193543  595557 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 13:38:12.212872  595557 cli_runner.go:164] Run: docker volume create missing-upgrade-533439 --label name.minikube.sigs.k8s.io=missing-upgrade-533439 --label created_by.minikube.sigs.k8s.io=true
	I1213 13:38:12.232265  595557 oci.go:103] Successfully created a docker volume missing-upgrade-533439
	I1213 13:38:12.232333  595557 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-533439-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-533439 --entrypoint /usr/bin/test -v missing-upgrade-533439:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I1213 13:38:12.645625  595557 oci.go:107] Successfully prepared a docker volume missing-upgrade-533439
	I1213 13:38:12.645682  595557 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1213 13:38:12.645706  595557 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 13:38:12.645857  595557 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-533439:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 13:38:17.173566  595557 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-533439:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.527666294s)
	I1213 13:38:17.173624  595557 kic.go:203] duration metric: took 4.527913719s to extract preloaded images to volume ...
	W1213 13:38:17.173731  595557 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1213 13:38:17.173799  595557 oci.go:249] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1213 13:38:17.173855  595557 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 13:38:17.237932  595557 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-533439 --name missing-upgrade-533439 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-533439 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-533439 --network missing-upgrade-533439 --ip 192.168.76.2 --volume missing-upgrade-533439:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I1213 13:38:17.538660  595557 cli_runner.go:164] Run: docker container inspect missing-upgrade-533439 --format={{.State.Running}}
	I1213 13:38:17.557373  595557 cli_runner.go:164] Run: docker container inspect missing-upgrade-533439 --format={{.State.Status}}
	I1213 13:38:17.576638  595557 cli_runner.go:164] Run: docker exec missing-upgrade-533439 stat /var/lib/dpkg/alternatives/iptables
	I1213 13:38:17.621513  595557 oci.go:144] the created container "missing-upgrade-533439" has a running status.
	I1213 13:38:17.621560  595557 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/missing-upgrade-533439/id_rsa...
	I1213 13:38:17.889645  595557 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-390571/.minikube/machines/missing-upgrade-533439/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 13:38:17.915246  595557 cli_runner.go:164] Run: docker container inspect missing-upgrade-533439 --format={{.State.Status}}
	I1213 13:38:17.933306  595557 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 13:38:17.933319  595557 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-533439 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 13:38:17.993647  595557 cli_runner.go:164] Run: docker container inspect missing-upgrade-533439 --format={{.State.Status}}
	I1213 13:38:18.013370  595557 machine.go:93] provisionDockerMachine start ...
	I1213 13:38:18.013484  595557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-533439
	I1213 13:38:18.030733  595557 main.go:141] libmachine: Using SSH client type: native
	I1213 13:38:18.030996  595557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33386 <nil> <nil>}
	I1213 13:38:18.031006  595557 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 13:38:18.158897  595557 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-533439
	
	I1213 13:38:18.158921  595557 ubuntu.go:169] provisioning hostname "missing-upgrade-533439"
	I1213 13:38:18.158986  595557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-533439
	I1213 13:38:18.181816  595557 main.go:141] libmachine: Using SSH client type: native
	I1213 13:38:18.182059  595557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33386 <nil> <nil>}
	I1213 13:38:18.182069  595557 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-533439 && echo "missing-upgrade-533439" | sudo tee /etc/hostname
	I1213 13:38:18.341192  595557 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-533439
	
	I1213 13:38:18.341273  595557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-533439
	I1213 13:38:18.365070  595557 main.go:141] libmachine: Using SSH client type: native
	I1213 13:38:18.365335  595557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33386 <nil> <nil>}
	I1213 13:38:18.365359  595557 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-533439' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-533439/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-533439' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:38:18.498325  595557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 13:38:18.498346  595557 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/22122-390571/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-390571/.minikube}
	I1213 13:38:18.498384  595557 ubuntu.go:177] setting up certificates
	I1213 13:38:18.498397  595557 provision.go:84] configureAuth start
	I1213 13:38:18.498458  595557 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-533439
	I1213 13:38:18.519955  595557 provision.go:143] copyHostCerts
	I1213 13:38:18.520014  595557 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem, removing ...
	I1213 13:38:18.520025  595557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem
	I1213 13:38:18.520119  595557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem (1078 bytes)
	I1213 13:38:18.520228  595557 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem, removing ...
	I1213 13:38:18.520234  595557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem
	I1213 13:38:18.520267  595557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem (1123 bytes)
	I1213 13:38:18.520332  595557 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem, removing ...
	I1213 13:38:18.520336  595557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem
	I1213 13:38:18.520366  595557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem (1679 bytes)
	I1213 13:38:18.520430  595557 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-533439 san=[127.0.0.1 192.168.76.2 localhost minikube missing-upgrade-533439]
	I1213 13:38:18.697478  595557 provision.go:177] copyRemoteCerts
	I1213 13:38:18.697546  595557 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:38:18.697598  595557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-533439
	I1213 13:38:18.719716  595557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33386 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/missing-upgrade-533439/id_rsa Username:docker}
	I1213 13:38:18.822037  595557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:38:18.853569  595557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1213 13:38:18.888110  595557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 13:38:18.920315  595557 provision.go:87] duration metric: took 421.901987ms to configureAuth
	I1213 13:38:18.920338  595557 ubuntu.go:193] setting minikube options for container-runtime
	I1213 13:38:18.920582  595557 config.go:182] Loaded profile config "missing-upgrade-533439": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 13:38:18.920707  595557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-533439
	I1213 13:38:18.945425  595557 main.go:141] libmachine: Using SSH client type: native
	I1213 13:38:18.945693  595557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 127.0.0.1 33386 <nil> <nil>}
	I1213 13:38:18.945708  595557 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:38:19.209711  595557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:38:19.209734  595557 machine.go:96] duration metric: took 1.196346689s to provisionDockerMachine
	I1213 13:38:19.209752  595557 client.go:171] duration metric: took 7.129565515s to LocalClient.Create
	I1213 13:38:19.209800  595557 start.go:167] duration metric: took 7.129640894s to libmachine.API.Create "missing-upgrade-533439"
	I1213 13:38:19.209809  595557 start.go:293] postStartSetup for "missing-upgrade-533439" (driver="docker")
	I1213 13:38:19.209822  595557 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:38:19.209894  595557 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:38:19.209941  595557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-533439
	I1213 13:38:19.230112  595557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33386 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/missing-upgrade-533439/id_rsa Username:docker}
	I1213 13:38:19.335054  595557 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:38:19.338488  595557 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 13:38:19.338519  595557 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1213 13:38:19.338525  595557 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1213 13:38:19.338531  595557 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1213 13:38:19.338541  595557 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/addons for local assets ...
	I1213 13:38:19.338600  595557 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/files for local assets ...
	I1213 13:38:19.338702  595557 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem -> 3941302.pem in /etc/ssl/certs
	I1213 13:38:19.338864  595557 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 13:38:19.349289  595557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:38:19.380366  595557 start.go:296] duration metric: took 170.53958ms for postStartSetup
	I1213 13:38:19.380892  595557 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-533439
	I1213 13:38:19.408246  595557 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/missing-upgrade-533439/config.json ...
	I1213 13:38:19.408510  595557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:38:19.408554  595557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-533439
	I1213 13:38:19.430487  595557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33386 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/missing-upgrade-533439/id_rsa Username:docker}
	I1213 13:38:19.525433  595557 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 13:38:19.534095  595557 start.go:128] duration metric: took 7.512584825s to createHost
	I1213 13:38:19.534114  595557 start.go:83] releasing machines lock for "missing-upgrade-533439", held for 7.512808791s
	I1213 13:38:19.534195  595557 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-533439
	I1213 13:38:19.554085  595557 ssh_runner.go:195] Run: cat /version.json
	I1213 13:38:19.554124  595557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-533439
	I1213 13:38:19.554168  595557 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:38:19.554256  595557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-533439
	I1213 13:38:19.573376  595557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33386 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/missing-upgrade-533439/id_rsa Username:docker}
	I1213 13:38:19.574657  595557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33386 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/missing-upgrade-533439/id_rsa Username:docker}
	I1213 13:38:19.666296  595557 ssh_runner.go:195] Run: systemctl --version
	I1213 13:38:19.760086  595557 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:38:19.923075  595557 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 13:38:19.928034  595557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:38:19.955095  595557 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1213 13:38:19.955172  595557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:38:19.991577  595557 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1213 13:38:19.991609  595557 start.go:495] detecting cgroup driver to use...
	I1213 13:38:19.991643  595557 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 13:38:19.991717  595557 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:38:20.009156  595557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:38:20.021870  595557 docker.go:217] disabling cri-docker service (if available) ...
	I1213 13:38:20.021921  595557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:38:20.035465  595557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:38:20.053002  595557 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:38:20.130531  595557 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:38:20.207926  595557 docker.go:233] disabling docker service ...
	I1213 13:38:20.207984  595557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:38:20.225582  595557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:38:20.238285  595557 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:38:20.313650  595557 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:38:20.525705  595557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:38:20.539367  595557 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:38:20.557341  595557 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1213 13:38:20.557382  595557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:20.570579  595557 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 13:38:20.570644  595557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:20.581604  595557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:20.593630  595557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:20.606875  595557 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:38:20.617664  595557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:20.629396  595557 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:20.648946  595557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:38:20.660737  595557 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:38:20.671972  595557 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:38:20.682854  595557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:38:20.770150  595557 ssh_runner.go:195] Run: sudo systemctl restart crio
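
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) before crio is restarted. Below is a minimal sketch of one such substitution, the pause_image rewrite, done with a multiline regexp instead of sed; setPauseImage is an illustrative helper, not minikube's code.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setPauseImage rewrites the pause_image line in a crio drop-in, the same
// substitution the `sed -i 's|^.*pause_image = .*$|...|'` call above performs.
func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	updated := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
	return os.WriteFile(path, updated, 0o644)
}

func main() {
	// Path and image taken from the log above.
	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
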
	I1213 13:38:20.884731  595557 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:38:20.884857  595557 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:38:20.890381  595557 start.go:563] Will wait 60s for crictl version
	I1213 13:38:20.890451  595557 ssh_runner.go:195] Run: which crictl
	I1213 13:38:20.895175  595557 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 13:38:20.940404  595557 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1213 13:38:20.940511  595557 ssh_runner.go:195] Run: crio --version
	I1213 13:38:20.985495  595557 ssh_runner.go:195] Run: crio --version
	I1213 13:38:21.032168  595557 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.24.6 ...
	
	
	==> CRI-O <==
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.231764963Z" level=info msg="Conmon does support the --sync option"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.231808825Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.231823535Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.232576661Z" level=info msg="Conmon does support the --sync option"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.232600859Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.23675402Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.236783453Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.237273036Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.237669324Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.237714669Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.314718234Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-t7b79 Namespace:kube-system ID:8a5f1aacc4bcc505676ea3a5a8af63bfe2c975687cbed8be179eda960b966327 UID:019ff04f-8424-4ee8-954f-eb1c487771ce NetNS:/var/run/netns/30fe85d5-55f9-4ca1-9c28-231814ce4081 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005983f0}] Aliases:map[]}"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.314954643Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-t7b79 for CNI network kindnet (type=ptp)"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.315360905Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-5fv2k Namespace:kube-system ID:d5cc912d29fe6e432b9cc575cd37322bf0c4481831e94c7c60e0ba392501fff1 UID:1e6a39a0-3609-42c3-8532-b6f8ceffda42 NetNS:/var/run/netns/63647a29-d47b-454d-ba10-2c0f00ba22a3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000598550}] Aliases:map[]}"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.315525832Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-5fv2k for CNI network kindnet (type=ptp)"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.316408194Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.316436775Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.316512534Z" level=info msg="Create NRI interface"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.316722259Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.316745925Z" level=info msg="runtime interface created"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.316765332Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.316797993Z" level=info msg="runtime interface starting up..."
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.316807275Z" level=info msg="starting plugins..."
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.316833057Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 13:38:12 pause-484783 crio[2199]: time="2025-12-13T13:38:12.317414439Z" level=info msg="No systemd watchdog enabled"
	Dec 13 13:38:12 pause-484783 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	b7a079362a05a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   15 seconds ago       Running             coredns                   0                   8a5f1aacc4bcc       coredns-66bc5c9577-t7b79               kube-system
	b12706571c86e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   15 seconds ago       Running             coredns                   0                   d5cc912d29fe6       coredns-66bc5c9577-5fv2k               kube-system
	a86b7f66e2ebc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   57 seconds ago       Running             kindnet-cni               0                   ac18ce2a45e1f       kindnet-lr5xb                          kube-system
	ac41f2f2f9b1b       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   57 seconds ago       Running             kube-proxy                0                   44da173cdf9bf       kube-proxy-kn5hh                       kube-system
	9c7bf796178c3       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   About a minute ago   Running             etcd                      0                   edf0101a9d385       etcd-pause-484783                      kube-system
	d3b97f25c25e6       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   About a minute ago   Running             kube-scheduler            0                   8e4a39d30480e       kube-scheduler-pause-484783            kube-system
	15061cf1e0286       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   About a minute ago   Running             kube-controller-manager   0                   b5d6ee19724e7       kube-controller-manager-pause-484783   kube-system
	199ba1838f628       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   About a minute ago   Running             kube-apiserver            0                   b0bda7c8295df       kube-apiserver-pause-484783            kube-system
	
	
	==> coredns [b12706571c86e31586b14584ea7da146350691dcdca5e35260ce4e8e2451dd17] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35787 - 5482 "HINFO IN 8757055889704573740.2148341591570753963. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.097271417s
	
	
	==> coredns [b7a079362a05a33a6c08a026193aa38711129117e8344e3afb4881a090d24a14] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33401 - 43482 "HINFO IN 1170256902137577025.8256869183820981561. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.079908291s
	
	
	==> describe nodes <==
	Name:               pause-484783
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-484783
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=pause-484783
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_37_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:37:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-484783
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:38:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:38:09 +0000   Sat, 13 Dec 2025 13:37:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:38:09 +0000   Sat, 13 Dec 2025 13:37:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:38:09 +0000   Sat, 13 Dec 2025 13:37:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 13:38:09 +0000   Sat, 13 Dec 2025 13:38:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-484783
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                155ba732-0e2c-4b6e-8ad1-6aafd2a9edb7
	  Boot ID:                    3a031c38-2de5-4abf-9191-ca3cf8c37af1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-5fv2k                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     58s
	  kube-system                 coredns-66bc5c9577-t7b79                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     58s
	  kube-system                 etcd-pause-484783                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         65s
	  kube-system                 kindnet-lr5xb                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      58s
	  kube-system                 kube-apiserver-pause-484783             250m (3%)     0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-controller-manager-pause-484783    200m (2%)     0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-proxy-kn5hh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-pause-484783             100m (1%)     0 (0%)      0 (0%)           0 (0%)         64s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  70s (x8 over 70s)  kubelet          Node pause-484783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s (x8 over 70s)  kubelet          Node pause-484783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s (x8 over 70s)  kubelet          Node pause-484783 status is now: NodeHasSufficientPID
	  Normal  Starting                 64s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  64s                kubelet          Node pause-484783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s                kubelet          Node pause-484783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s                kubelet          Node pause-484783 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           59s                node-controller  Node pause-484783 event: Registered Node pause-484783 in Controller
	  Normal  NodeReady                17s                kubelet          Node pause-484783 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea b7 dd 32 fb 08 08 06
	[  +0.000396] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be c4 f7 a4 8d 16 08 06
	[Dec13 13:07] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +1.009708] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +1.024845] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +1.022879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +1.023888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +1.024907] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +2.047757] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +4.030610] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000034] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[  +8.255132] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[ +16.382284] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	[Dec13 13:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 7e d9 9d ab cc be 22 40 37 73 82 fe 08 00
	
	
	==> etcd [9c7bf796178c3a16afac713c7182399638dd0c8cf1ff2a54bcf6a4c0c606997e] <==
	{"level":"info","ts":"2025-12-13T13:37:24.351403Z","caller":"traceutil/trace.go:172","msg":"trace[1328650903] transaction","detail":"{read_only:false; response_revision:349; number_of_response:1; }","duration":"165.767063ms","start":"2025-12-13T13:37:24.185627Z","end":"2025-12-13T13:37:24.351394Z","steps":["trace[1328650903] 'process raft request'  (duration: 165.713566ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:37:24.351435Z","caller":"traceutil/trace.go:172","msg":"trace[114704394] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"322.744746ms","start":"2025-12-13T13:37:24.028682Z","end":"2025-12-13T13:37:24.351426Z","steps":["trace[114704394] 'process raft request'  (duration: 322.44624ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T13:37:24.351484Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"188.765333ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" limit:1 ","response":"range_response_count:1 size:209"}
	{"level":"warn","ts":"2025-12-13T13:37:24.351468Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T13:37:24.027163Z","time spent":"324.011467ms","remote":"127.0.0.1:45712","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":697,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:0 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:640 >> failure:<>"}
	{"level":"info","ts":"2025-12-13T13:37:24.351515Z","caller":"traceutil/trace.go:172","msg":"trace[1356278954] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:1; response_revision:349; }","duration":"188.804034ms","start":"2025-12-13T13:37:24.162703Z","end":"2025-12-13T13:37:24.351507Z","steps":["trace[1356278954] 'agreement among raft nodes before linearized reading'  (duration: 188.703078ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:37:24.351560Z","caller":"traceutil/trace.go:172","msg":"trace[2001224177] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"223.572701ms","start":"2025-12-13T13:37:24.127977Z","end":"2025-12-13T13:37:24.351550Z","steps":["trace[2001224177] 'process raft request'  (duration: 223.23447ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T13:37:24.351650Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T13:37:24.028663Z","time spent":"322.787414ms","remote":"127.0.0.1:46776","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":899,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/clusterroles/admin\" mod_revision:111 > success:<request_put:<key:\"/registry/clusterroles/admin\" value_size:863 >> failure:<request_range:<key:\"/registry/clusterroles/admin\" > >"}
	{"level":"warn","ts":"2025-12-13T13:37:24.351657Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T13:37:24.029128Z","time spent":"322.259876ms","remote":"127.0.0.1:46776","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2194,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/clusterroles/view\" mod_revision:113 > success:<request_put:<key:\"/registry/clusterroles/view\" value_size:2159 >> failure:<request_range:<key:\"/registry/clusterroles/view\" > >"}
	{"level":"warn","ts":"2025-12-13T13:37:24.351715Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"205.482531ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-484783\" limit:1 ","response":"range_response_count:1 size:7498"}
	{"level":"warn","ts":"2025-12-13T13:37:24.351729Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T13:37:24.028609Z","time spent":"322.798364ms","remote":"127.0.0.1:46776","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2299,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/clusterroles/edit\" mod_revision:112 > success:<request_put:<key:\"/registry/clusterroles/edit\" value_size:2264 >> failure:<request_range:<key:\"/registry/clusterroles/edit\" > >"}
	{"level":"info","ts":"2025-12-13T13:37:24.351756Z","caller":"traceutil/trace.go:172","msg":"trace[379070677] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-484783; range_end:; response_count:1; response_revision:349; }","duration":"205.525624ms","start":"2025-12-13T13:37:24.146221Z","end":"2025-12-13T13:37:24.351746Z","steps":["trace[379070677] 'agreement among raft nodes before linearized reading'  (duration: 205.393425ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T13:37:24.351841Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T13:37:24.027232Z","time spent":"323.984412ms","remote":"127.0.0.1:46514","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":805,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/servicecidrs/kubernetes\" mod_revision:16 > success:<request_put:<key:\"/registry/servicecidrs/kubernetes\" value_size:764 >> failure:<request_range:<key:\"/registry/servicecidrs/kubernetes\" > >"}
	{"level":"warn","ts":"2025-12-13T13:37:24.351922Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T13:37:24.027282Z","time spent":"323.928561ms","remote":"127.0.0.1:47386","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3726,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/controllerrevisions/kube-system/kindnet-78f866cbfd\" mod_revision:0 > success:<request_put:<key:\"/registry/controllerrevisions/kube-system/kindnet-78f866cbfd\" value_size:3658 >> failure:<>"}
	{"level":"info","ts":"2025-12-13T13:37:24.351120Z","caller":"traceutil/trace.go:172","msg":"trace[1291544517] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:1; response_revision:340; }","duration":"238.984308ms","start":"2025-12-13T13:37:24.112119Z","end":"2025-12-13T13:37:24.351104Z","steps":["trace[1291544517] 'agreement among raft nodes before linearized reading'  (duration: 153.536622ms)","trace[1291544517] 'range keys from in-memory index tree'  (duration: 85.19425ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T13:37:24.351964Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T13:37:24.028969Z","time spent":"322.36154ms","remote":"127.0.0.1:47386","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/controllerrevisions/kube-system/kube-proxy-66d5f8d6f6\" mod_revision:0 > success:<request_put:<key:\"/registry/controllerrevisions/kube-system/kube-proxy-66d5f8d6f6\" value_size:2031 >> failure:<>"}
	{"level":"warn","ts":"2025-12-13T13:37:24.717553Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"203.331015ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4299"}
	{"level":"info","ts":"2025-12-13T13:37:24.717618Z","caller":"traceutil/trace.go:172","msg":"trace[387483988] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:374; }","duration":"203.399223ms","start":"2025-12-13T13:37:24.514203Z","end":"2025-12-13T13:37:24.717602Z","steps":["trace[387483988] 'agreement among raft nodes before linearized reading'  (duration: 143.134741ms)","trace[387483988] 'range keys from in-memory index tree'  (duration: 60.080872ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T13:37:24.717550Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"203.224309ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-kn5hh\" limit:1 ","response":"range_response_count:1 size:3429"}
	{"level":"info","ts":"2025-12-13T13:37:24.717703Z","caller":"traceutil/trace.go:172","msg":"trace[1441952844] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-kn5hh; range_end:; response_count:1; response_revision:374; }","duration":"203.38692ms","start":"2025-12-13T13:37:24.514302Z","end":"2025-12-13T13:37:24.717689Z","steps":["trace[1441952844] 'agreement among raft nodes before linearized reading'  (duration: 143.059443ms)","trace[1441952844] 'range keys from in-memory index tree'  (duration: 60.08121ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T13:37:24.717714Z","caller":"traceutil/trace.go:172","msg":"trace[11575817] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"194.659378ms","start":"2025-12-13T13:37:24.523044Z","end":"2025-12-13T13:37:24.717703Z","steps":["trace[11575817] 'process raft request'  (duration: 194.616263ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:37:24.717840Z","caller":"traceutil/trace.go:172","msg":"trace[2139430813] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"203.484711ms","start":"2025-12-13T13:37:24.514345Z","end":"2025-12-13T13:37:24.717829Z","steps":["trace[2139430813] 'process raft request'  (duration: 143.044553ms)","trace[2139430813] 'compare'  (duration: 59.981007ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T13:37:24.717997Z","caller":"traceutil/trace.go:172","msg":"trace[443948987] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"200.951047ms","start":"2025-12-13T13:37:24.517037Z","end":"2025-12-13T13:37:24.717988Z","steps":["trace[443948987] 'process raft request'  (duration: 200.553967ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:37:24.717854Z","caller":"traceutil/trace.go:172","msg":"trace[932506412] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"200.739931ms","start":"2025-12-13T13:37:24.517103Z","end":"2025-12-13T13:37:24.717843Z","steps":["trace[932506412] 'process raft request'  (duration: 200.529833ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:37:24.724615Z","caller":"traceutil/trace.go:172","msg":"trace[1883250059] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"128.724544ms","start":"2025-12-13T13:37:24.595749Z","end":"2025-12-13T13:37:24.724474Z","steps":["trace[1883250059] 'process raft request'  (duration: 128.536164ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:37:56.893837Z","caller":"traceutil/trace.go:172","msg":"trace[290670254] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"174.713526ms","start":"2025-12-13T13:37:56.719106Z","end":"2025-12-13T13:37:56.893819Z","steps":["trace[290670254] 'process raft request'  (duration: 174.498271ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:38:22 up  2:20,  0 user,  load average: 3.37, 1.99, 1.53
	Linux pause-484783 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a86b7f66e2ebc03c4274e16df61bd2f021f1bd1120855d661801ec9c61029a88] <==
	I1213 13:37:25.080954       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 13:37:25.081240       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1213 13:37:25.081400       1 main.go:148] setting mtu 1500 for CNI 
	I1213 13:37:25.081415       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 13:37:25.081424       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T13:37:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 13:37:25.381240       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 13:37:25.381284       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 13:37:25.381299       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 13:37:25.381604       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1213 13:37:55.382930       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1213 13:37:55.382934       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1213 13:37:55.382975       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1213 13:37:55.382904       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1213 13:37:56.781700       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 13:37:56.781739       1 metrics.go:72] Registering metrics
	I1213 13:37:56.781860       1 controller.go:711] "Syncing nftables rules"
	I1213 13:38:05.387963       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 13:38:05.388034       1 main.go:301] handling current node
	I1213 13:38:15.382141       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 13:38:15.382221       1 main.go:301] handling current node
	
	
	==> kube-apiserver [199ba1838f628b89e26d7b2703ef401cffa89869e6755f9fc80b4d636b3fdc88] <==
	I1213 13:37:15.805484       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 13:37:15.805491       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 13:37:15.805498       1 cache.go:39] Caches are synced for autoregister controller
	I1213 13:37:15.807248       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1213 13:37:15.809539       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:37:15.823028       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 13:37:15.847788       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:37:15.854304       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 13:37:16.700597       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1213 13:37:16.705414       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1213 13:37:16.705432       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 13:37:17.224488       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 13:37:17.266617       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 13:37:17.409546       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1213 13:37:17.416023       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1213 13:37:17.417458       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 13:37:17.422299       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:37:18.068936       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 13:37:18.367205       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 13:37:18.375135       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1213 13:37:18.383007       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 13:37:24.026519       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1213 13:37:24.125163       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 13:37:24.355014       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:37:24.396820       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [15061cf1e02860e17c437138c82b5df9e17b52159da0cdab64b358cbd74510ac] <==
	I1213 13:37:23.065924       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1213 13:37:23.065848       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 13:37:23.066087       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 13:37:23.066296       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1213 13:37:23.067298       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-484783" podCIDRs=["10.244.0.0/24"]
	I1213 13:37:23.067475       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 13:37:23.074894       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 13:37:23.074933       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 13:37:23.085108       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 13:37:23.114816       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 13:37:23.114820       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 13:37:23.114910       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 13:37:23.114926       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 13:37:23.115010       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 13:37:23.115196       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1213 13:37:23.116051       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 13:37:23.116072       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 13:37:23.116188       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 13:37:23.116593       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 13:37:23.119433       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 13:37:23.121616       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 13:37:23.125903       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 13:37:23.132361       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1213 13:37:23.140646       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 13:38:08.073012       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ac41f2f2f9b1bf7cd0fe9a4142241d9e5845d510dd7698a4d9ef37991b4c7c01] <==
	I1213 13:37:24.941196       1 server_linux.go:53] "Using iptables proxy"
	I1213 13:37:25.009674       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 13:37:25.110258       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 13:37:25.110302       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1213 13:37:25.110411       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:37:25.135337       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:37:25.135384       1 server_linux.go:132] "Using iptables Proxier"
	I1213 13:37:25.140928       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:37:25.141317       1 server.go:527] "Version info" version="v1.34.2"
	I1213 13:37:25.141356       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:37:25.143041       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:37:25.143074       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:37:25.143131       1 config.go:200] "Starting service config controller"
	I1213 13:37:25.143142       1 config.go:309] "Starting node config controller"
	I1213 13:37:25.143153       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:37:25.143155       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:37:25.143161       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:37:25.143145       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:37:25.243279       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:37:25.243341       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 13:37:25.243414       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 13:37:25.244081       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [d3b97f25c25e6b8f8c97f9a4d2b4d8d07f26642a7dceb69f6e3a270f5f27f195] <==
	E1213 13:37:15.770148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 13:37:15.770275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 13:37:15.770435       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 13:37:15.770547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 13:37:15.770705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 13:37:15.770702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 13:37:15.770727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 13:37:15.770786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 13:37:15.770843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 13:37:15.770444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 13:37:15.771174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 13:37:15.771176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 13:37:15.771284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 13:37:15.771430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 13:37:15.771436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 13:37:15.771429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 13:37:16.654823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 13:37:16.729455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 13:37:16.746865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 13:37:16.771276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 13:37:16.826586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 13:37:16.842736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 13:37:16.892419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 13:37:16.950616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1213 13:37:19.767310       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 13:37:24 pause-484783 kubelet[1308]: I1213 13:37:24.571936    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9fd4100d-49a1-4083-b901-4f4583d22ac0-kube-proxy\") pod \"kube-proxy-kn5hh\" (UID: \"9fd4100d-49a1-4083-b901-4f4583d22ac0\") " pod="kube-system/kube-proxy-kn5hh"
	Dec 13 13:37:24 pause-484783 kubelet[1308]: I1213 13:37:24.571971    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fd4100d-49a1-4083-b901-4f4583d22ac0-xtables-lock\") pod \"kube-proxy-kn5hh\" (UID: \"9fd4100d-49a1-4083-b901-4f4583d22ac0\") " pod="kube-system/kube-proxy-kn5hh"
	Dec 13 13:37:24 pause-484783 kubelet[1308]: I1213 13:37:24.571989    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fd4100d-49a1-4083-b901-4f4583d22ac0-lib-modules\") pod \"kube-proxy-kn5hh\" (UID: \"9fd4100d-49a1-4083-b901-4f4583d22ac0\") " pod="kube-system/kube-proxy-kn5hh"
	Dec 13 13:37:24 pause-484783 kubelet[1308]: I1213 13:37:24.572002    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d5ad2a1-01cd-4f82-b31e-f3b1719fdc20-xtables-lock\") pod \"kindnet-lr5xb\" (UID: \"4d5ad2a1-01cd-4f82-b31e-f3b1719fdc20\") " pod="kube-system/kindnet-lr5xb"
	Dec 13 13:37:24 pause-484783 kubelet[1308]: I1213 13:37:24.572031    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shb6r\" (UniqueName: \"kubernetes.io/projected/4d5ad2a1-01cd-4f82-b31e-f3b1719fdc20-kube-api-access-shb6r\") pod \"kindnet-lr5xb\" (UID: \"4d5ad2a1-01cd-4f82-b31e-f3b1719fdc20\") " pod="kube-system/kindnet-lr5xb"
	Dec 13 13:37:24 pause-484783 kubelet[1308]: I1213 13:37:24.572160    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mck98\" (UniqueName: \"kubernetes.io/projected/9fd4100d-49a1-4083-b901-4f4583d22ac0-kube-api-access-mck98\") pod \"kube-proxy-kn5hh\" (UID: \"9fd4100d-49a1-4083-b901-4f4583d22ac0\") " pod="kube-system/kube-proxy-kn5hh"
	Dec 13 13:37:24 pause-484783 kubelet[1308]: I1213 13:37:24.572203    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4d5ad2a1-01cd-4f82-b31e-f3b1719fdc20-cni-cfg\") pod \"kindnet-lr5xb\" (UID: \"4d5ad2a1-01cd-4f82-b31e-f3b1719fdc20\") " pod="kube-system/kindnet-lr5xb"
	Dec 13 13:37:24 pause-484783 kubelet[1308]: I1213 13:37:24.572227    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d5ad2a1-01cd-4f82-b31e-f3b1719fdc20-lib-modules\") pod \"kindnet-lr5xb\" (UID: \"4d5ad2a1-01cd-4f82-b31e-f3b1719fdc20\") " pod="kube-system/kindnet-lr5xb"
	Dec 13 13:37:25 pause-484783 kubelet[1308]: I1213 13:37:25.321214    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kn5hh" podStartSLOduration=1.321150401 podStartE2EDuration="1.321150401s" podCreationTimestamp="2025-12-13 13:37:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:37:25.32092217 +0000 UTC m=+7.181981914" watchObservedRunningTime="2025-12-13 13:37:25.321150401 +0000 UTC m=+7.182210143"
	Dec 13 13:37:25 pause-484783 kubelet[1308]: I1213 13:37:25.321361    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-lr5xb" podStartSLOduration=1.321353269 podStartE2EDuration="1.321353269s" podCreationTimestamp="2025-12-13 13:37:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:37:25.29738797 +0000 UTC m=+7.158447712" watchObservedRunningTime="2025-12-13 13:37:25.321353269 +0000 UTC m=+7.182413010"
	Dec 13 13:38:05 pause-484783 kubelet[1308]: I1213 13:38:05.806409    1308 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 13 13:38:05 pause-484783 kubelet[1308]: I1213 13:38:05.878545    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k84b8\" (UniqueName: \"kubernetes.io/projected/1e6a39a0-3609-42c3-8532-b6f8ceffda42-kube-api-access-k84b8\") pod \"coredns-66bc5c9577-5fv2k\" (UID: \"1e6a39a0-3609-42c3-8532-b6f8ceffda42\") " pod="kube-system/coredns-66bc5c9577-5fv2k"
	Dec 13 13:38:05 pause-484783 kubelet[1308]: I1213 13:38:05.878609    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e6a39a0-3609-42c3-8532-b6f8ceffda42-config-volume\") pod \"coredns-66bc5c9577-5fv2k\" (UID: \"1e6a39a0-3609-42c3-8532-b6f8ceffda42\") " pod="kube-system/coredns-66bc5c9577-5fv2k"
	Dec 13 13:38:05 pause-484783 kubelet[1308]: I1213 13:38:05.979844    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jstcs\" (UniqueName: \"kubernetes.io/projected/019ff04f-8424-4ee8-954f-eb1c487771ce-kube-api-access-jstcs\") pod \"coredns-66bc5c9577-t7b79\" (UID: \"019ff04f-8424-4ee8-954f-eb1c487771ce\") " pod="kube-system/coredns-66bc5c9577-t7b79"
	Dec 13 13:38:05 pause-484783 kubelet[1308]: I1213 13:38:05.979931    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/019ff04f-8424-4ee8-954f-eb1c487771ce-config-volume\") pod \"coredns-66bc5c9577-t7b79\" (UID: \"019ff04f-8424-4ee8-954f-eb1c487771ce\") " pod="kube-system/coredns-66bc5c9577-t7b79"
	Dec 13 13:38:06 pause-484783 kubelet[1308]: I1213 13:38:06.377365    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-t7b79" podStartSLOduration=42.377341812 podStartE2EDuration="42.377341812s" podCreationTimestamp="2025-12-13 13:37:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:38:06.376988023 +0000 UTC m=+48.238047765" watchObservedRunningTime="2025-12-13 13:38:06.377341812 +0000 UTC m=+48.238401554"
	Dec 13 13:38:10 pause-484783 kubelet[1308]: W1213 13:38:10.248006    1308 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 13 13:38:10 pause-484783 kubelet[1308]: E1213 13:38:10.248106    1308 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Dec 13 13:38:10 pause-484783 kubelet[1308]: E1213 13:38:10.248199    1308 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 13 13:38:10 pause-484783 kubelet[1308]: E1213 13:38:10.248220    1308 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 13 13:38:10 pause-484783 kubelet[1308]: E1213 13:38:10.248235    1308 kubelet.go:2614] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 13 13:38:16 pause-484783 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 13:38:16 pause-484783 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 13:38:16 pause-484783 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 13:38:16 pause-484783 systemd[1]: kubelet.service: Consumed 2.393s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-484783 -n pause-484783
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-484783 -n pause-484783: exit status 2 (347.361542ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-484783 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.59s)
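For reference, the node state behind this failure can be re-probed by hand; a minimal sketch, assuming the pause-484783 profile still exists, using only commands this report already invokes elsewhere:

	out/minikube-linux-amd64 status -p pause-484783
	out/minikube-linux-amd64 ssh -p pause-484783 sudo systemctl status crio --no-pager
	out/minikube-linux-amd64 ssh -p pause-484783 sudo runc list -f json

The first command summarises host/kubelet/apiserver state, the second shows whether the container runtime was actually stopped by the pause (the kubelet journal above shows it losing /var/run/crio/crio.sock before kubelet.service is stopped), and the third is the same low-level probe that the MK_ADDON_ENABLE_PAUSED failures later in this report show minikube shelling out to.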

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-417583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-417583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (291.116346ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:45:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-417583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-417583 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-417583 describe deploy/metrics-server -n kube-system: exit status 1 (84.44284ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-417583 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
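Reading the failure chain above: the addon enable aborts while minikube checks whether the cluster is paused; that check shells out to "sudo runc list -f json", which exits 1 on this crio node because /run/runc does not exist, so the metrics-server manifests are apparently never applied and the kubectl describe above can only return NotFound. A minimal way to reproduce the probe by hand, assuming old-k8s-version-417583 is still running (the crictl call is shown only for comparison, as the runtime-native view of the same containers):

	out/minikube-linux-amd64 ssh -p old-k8s-version-417583 sudo runc list -f json
	out/minikube-linux-amd64 ssh -p old-k8s-version-417583 sudo crictl ps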
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-417583
helpers_test.go:244: (dbg) docker inspect old-k8s-version-417583:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "43fbdd9bc16f6948cc67363ead86d4b92da73fde95dcde2a6781335bb540eae6",
	        "Created": "2025-12-13T13:44:18.6267097Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 692340,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:44:18.69253169Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/43fbdd9bc16f6948cc67363ead86d4b92da73fde95dcde2a6781335bb540eae6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/43fbdd9bc16f6948cc67363ead86d4b92da73fde95dcde2a6781335bb540eae6/hostname",
	        "HostsPath": "/var/lib/docker/containers/43fbdd9bc16f6948cc67363ead86d4b92da73fde95dcde2a6781335bb540eae6/hosts",
	        "LogPath": "/var/lib/docker/containers/43fbdd9bc16f6948cc67363ead86d4b92da73fde95dcde2a6781335bb540eae6/43fbdd9bc16f6948cc67363ead86d4b92da73fde95dcde2a6781335bb540eae6-json.log",
	        "Name": "/old-k8s-version-417583",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-417583:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-417583",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "43fbdd9bc16f6948cc67363ead86d4b92da73fde95dcde2a6781335bb540eae6",
	                "LowerDir": "/var/lib/docker/overlay2/8fe32fc87ee53a75ed4b398af0c6f7afe0037d62c0d6677e1d539a22b32748aa-init/diff:/var/lib/docker/overlay2/2ab30f867418f233812f5ff754587aaeab7569a5579dc6a5c99873a35cf81eb6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8fe32fc87ee53a75ed4b398af0c6f7afe0037d62c0d6677e1d539a22b32748aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8fe32fc87ee53a75ed4b398af0c6f7afe0037d62c0d6677e1d539a22b32748aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8fe32fc87ee53a75ed4b398af0c6f7afe0037d62c0d6677e1d539a22b32748aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-417583",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-417583/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-417583",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-417583",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-417583",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ba7232d7b147fbb4cefac7aa1e96ded95d64676c517bd674037500483ea2d23e",
	            "SandboxKey": "/var/run/docker/netns/ba7232d7b147",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-417583": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cde7b54cbcc8a3b8ab40bd9dd21786e91e0af49dc344d306865f5245da4b5481",
	                    "EndpointID": "2c044f398951d164da1684bd63b7e9392fe21b791a13a133c86dcfa41ada8420",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "b6:a7:37:f0:1a:f8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-417583",
	                        "43fbdd9bc16f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-417583 -n old-k8s-version-417583
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-417583 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-417583 logs -n 25: (1.221181842s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                          │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-884214 sudo systemctl cat docker --no-pager                                                                                                 │ flannel-884214         │ jenkins │ v1.37.0 │ 13 Dec 25 13:44 UTC │ 13 Dec 25 13:44 UTC │
	│ ssh     │ -p flannel-884214 sudo cat /etc/docker/daemon.json                                                                                                     │ flannel-884214         │ jenkins │ v1.37.0 │ 13 Dec 25 13:44 UTC │                     │
	│ ssh     │ -p flannel-884214 sudo docker system info                                                                                                              │ flannel-884214         │ jenkins │ v1.37.0 │ 13 Dec 25 13:44 UTC │                     │
	│ ssh     │ -p flannel-884214 sudo systemctl status cri-docker --all --full --no-pager                                                                             │ flannel-884214         │ jenkins │ v1.37.0 │ 13 Dec 25 13:44 UTC │                     │
	│ ssh     │ -p flannel-884214 sudo systemctl cat cri-docker --no-pager                                                                                             │ flannel-884214         │ jenkins │ v1.37.0 │ 13 Dec 25 13:44 UTC │ 13 Dec 25 13:44 UTC │
	│ ssh     │ -p flannel-884214 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                        │ flannel-884214         │ jenkins │ v1.37.0 │ 13 Dec 25 13:44 UTC │                     │
	│ ssh     │ -p flannel-884214 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                  │ flannel-884214         │ jenkins │ v1.37.0 │ 13 Dec 25 13:44 UTC │ 13 Dec 25 13:44 UTC │
	│ ssh     │ -p flannel-884214 sudo cri-dockerd --version                                                                                                           │ flannel-884214         │ jenkins │ v1.37.0 │ 13 Dec 25 13:44 UTC │ 13 Dec 25 13:44 UTC │
	│ ssh     │ -p flannel-884214 sudo systemctl status containerd --all --full --no-pager                                                                             │ flannel-884214         │ jenkins │ v1.37.0 │ 13 Dec 25 13:44 UTC │                     │
	│ ssh     │ -p flannel-884214 sudo systemctl cat containerd --no-pager                                                                                             │ flannel-884214         │ jenkins │ v1.37.0 │ 13 Dec 25 13:44 UTC │ 13 Dec 25 13:44 UTC │
	│ ssh     │ -p flannel-884214 sudo cat /lib/systemd/system/containerd.service                                                                                      │ flannel-884214         │ jenkins │ v1.37.0 │ 13 Dec 25 13:44 UTC │ 13 Dec 25 13:44 UTC │
	│ ssh     │ -p flannel-884214 sudo cat /etc/containerd/config.toml                                                                                                 │ flannel-884214         │ jenkins │ v1.37.0 │ 13 Dec 25 13:44 UTC │ 13 Dec 25 13:44 UTC │
	│ ssh     │ -p flannel-884214 sudo containerd config dump                                                                                                          │ flannel-884214         │ jenkins │ v1.37.0 │ 13 Dec 25 13:44 UTC │ 13 Dec 25 13:44 UTC │
	│ ssh     │ -p flannel-884214 sudo systemctl status crio --all --full --no-pager                                                                                   │ flannel-884214         │ jenkins │ v1.37.0 │ 13 Dec 25 13:44 UTC │ 13 Dec 25 13:44 UTC │
	│ ssh     │ -p flannel-884214 sudo systemctl cat crio --no-pager                                                                                                   │ flannel-884214         │ jenkins │ v1.37.0 │ 13 Dec 25 13:44 UTC │ 13 Dec 25 13:44 UTC │
	│ ssh     │ -p flannel-884214 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                         │ flannel-884214         │ jenkins │ v1.37.0 │ 13 Dec 25 13:44 UTC │ 13 Dec 25 13:44 UTC │
	│ ssh     │ -p flannel-884214 sudo crio config                                                                                                                     │ flannel-884214         │ jenkins │ v1.37.0 │ 13 Dec 25 13:44 UTC │ 13 Dec 25 13:44 UTC │
	│ delete  │ -p flannel-884214                                                                                                                                      │ flannel-884214         │ jenkins │ v1.37.0 │ 13 Dec 25 13:44 UTC │ 13 Dec 25 13:44 UTC │
	│ start   │ -p embed-certs-973953 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2 │ embed-certs-973953     │ jenkins │ v1.37.0 │ 13 Dec 25 13:44 UTC │                     │
	│ ssh     │ -p bridge-884214 pgrep -a kubelet                                                                                                                      │ bridge-884214          │ jenkins │ v1.37.0 │ 13 Dec 25 13:44 UTC │ 13 Dec 25 13:44 UTC │
	│ ssh     │ -p bridge-884214 sudo cat /etc/nsswitch.conf                                                                                                           │ bridge-884214          │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo cat /etc/hosts                                                                                                                   │ bridge-884214          │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-417583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain           │ old-k8s-version-417583 │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ ssh     │ -p bridge-884214 sudo cat /etc/resolv.conf                                                                                                             │ bridge-884214          │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo crictl pods                                                                                                                      │ bridge-884214          │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:44:51
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:44:51.584714  706714 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:44:51.585002  706714 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:44:51.585013  706714 out.go:374] Setting ErrFile to fd 2...
	I1213 13:44:51.585018  706714 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:44:51.585265  706714 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:44:51.585763  706714 out.go:368] Setting JSON to false
	I1213 13:44:51.587108  706714 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8840,"bootTime":1765624652,"procs":370,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:44:51.587163  706714 start.go:143] virtualization: kvm guest
	I1213 13:44:51.589221  706714 out.go:179] * [embed-certs-973953] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:44:51.590676  706714 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:44:51.590698  706714 notify.go:221] Checking for updates...
	I1213 13:44:51.592750  706714 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:44:51.594481  706714 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:44:51.595657  706714 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:44:51.596841  706714 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:44:51.598096  706714 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:44:51.599673  706714 config.go:182] Loaded profile config "bridge-884214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:44:51.599856  706714 config.go:182] Loaded profile config "no-preload-992258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:44:51.600009  706714 config.go:182] Loaded profile config "old-k8s-version-417583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1213 13:44:51.600120  706714 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:44:51.626355  706714 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:44:51.626450  706714 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:44:51.688043  706714 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:88 SystemTime:2025-12-13 13:44:51.677378549 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:44:51.688199  706714 docker.go:319] overlay module found
	I1213 13:44:51.689893  706714 out.go:179] * Using the docker driver based on user configuration
	I1213 13:44:51.690940  706714 start.go:309] selected driver: docker
	I1213 13:44:51.690957  706714 start.go:927] validating driver "docker" against <nil>
	I1213 13:44:51.690968  706714 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:44:51.691689  706714 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:44:51.750121  706714 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:88 SystemTime:2025-12-13 13:44:51.739545928 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:44:51.750299  706714 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 13:44:51.750529  706714 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:44:51.752091  706714 out.go:179] * Using Docker driver with root privileges
	I1213 13:44:51.753168  706714 cni.go:84] Creating CNI manager for ""
	I1213 13:44:51.753247  706714 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:44:51.753262  706714 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 13:44:51.753336  706714 start.go:353] cluster config:
	{Name:embed-certs-973953 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-973953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:44:51.754501  706714 out.go:179] * Starting "embed-certs-973953" primary control-plane node in "embed-certs-973953" cluster
	I1213 13:44:51.755639  706714 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 13:44:51.756751  706714 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 13:44:51.757812  706714 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:44:51.757841  706714 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 13:44:51.757860  706714 cache.go:65] Caching tarball of preloaded images
	I1213 13:44:51.757915  706714 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 13:44:51.757970  706714 preload.go:238] Found /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 13:44:51.757985  706714 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 13:44:51.758075  706714 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/config.json ...
	I1213 13:44:51.758095  706714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/config.json: {Name:mk2878c26ca5e2c05f75369cf5054ff65e3a84a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:44:51.779451  706714 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 13:44:51.779470  706714 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 13:44:51.779486  706714 cache.go:243] Successfully downloaded all kic artifacts
	I1213 13:44:51.779525  706714 start.go:360] acquireMachinesLock for embed-certs-973953: {Name:mk9bf136673a37f733c3ece23bc4966d2c2ebc12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:44:51.779656  706714 start.go:364] duration metric: took 105.367µs to acquireMachinesLock for "embed-certs-973953"
	I1213 13:44:51.779686  706714 start.go:93] Provisioning new machine with config: &{Name:embed-certs-973953 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-973953 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:44:51.779804  706714 start.go:125] createHost starting for "" (driver="docker")
	W1213 13:44:49.355427  682320 pod_ready.go:104] pod "coredns-66bc5c9577-4qgw6" is not "Ready", error: <nil>
	I1213 13:44:50.871480  682320 pod_ready.go:94] pod "coredns-66bc5c9577-4qgw6" is "Ready"
	I1213 13:44:50.871508  682320 pod_ready.go:86] duration metric: took 38.021251583s for pod "coredns-66bc5c9577-4qgw6" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:44:50.871517  682320 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gsvdw" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:44:50.873468  682320 pod_ready.go:99] pod "coredns-66bc5c9577-gsvdw" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-gsvdw" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-gsvdw" not found
	I1213 13:44:50.873490  682320 pod_ready.go:86] duration metric: took 1.967427ms for pod "coredns-66bc5c9577-gsvdw" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:44:50.877909  682320 pod_ready.go:83] waiting for pod "etcd-bridge-884214" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:44:50.883089  682320 pod_ready.go:94] pod "etcd-bridge-884214" is "Ready"
	I1213 13:44:50.883112  682320 pod_ready.go:86] duration metric: took 5.179095ms for pod "etcd-bridge-884214" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:44:50.885452  682320 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-884214" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:44:50.889513  682320 pod_ready.go:94] pod "kube-apiserver-bridge-884214" is "Ready"
	I1213 13:44:50.889532  682320 pod_ready.go:86] duration metric: took 4.056812ms for pod "kube-apiserver-bridge-884214" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:44:50.978433  682320 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-884214" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:44:51.254394  682320 pod_ready.go:94] pod "kube-controller-manager-bridge-884214" is "Ready"
	I1213 13:44:51.254424  682320 pod_ready.go:86] duration metric: took 275.961753ms for pod "kube-controller-manager-bridge-884214" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:44:51.454968  682320 pod_ready.go:83] waiting for pod "kube-proxy-vg6w5" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:44:51.854617  682320 pod_ready.go:94] pod "kube-proxy-vg6w5" is "Ready"
	I1213 13:44:51.854646  682320 pod_ready.go:86] duration metric: took 399.649851ms for pod "kube-proxy-vg6w5" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:44:52.054996  682320 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-884214" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:44:52.457227  682320 pod_ready.go:94] pod "kube-scheduler-bridge-884214" is "Ready"
	I1213 13:44:52.457264  682320 pod_ready.go:86] duration metric: took 402.241067ms for pod "kube-scheduler-bridge-884214" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:44:52.457280  682320 pod_ready.go:40] duration metric: took 39.611032165s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:44:52.513295  682320 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 13:44:52.515359  682320 out.go:179] * Done! kubectl is now configured to use "bridge-884214" cluster and "default" namespace by default
	W1213 13:44:49.953458  691434 node_ready.go:57] node "old-k8s-version-417583" has "Ready":"False" status (will retry)
	W1213 13:44:52.435682  691434 node_ready.go:57] node "old-k8s-version-417583" has "Ready":"False" status (will retry)
	I1213 13:44:49.169546  698745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:44:49.183804  698745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1213 13:44:49.187995  698745 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1213 13:44:49.188023  698745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1213 13:44:49.390036  698745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1213 13:44:49.394529  698745 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1213 13:44:49.394561  698745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1213 13:44:49.557209  698745 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:44:49.565769  698745 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 13:44:49.578711  698745 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 13:44:49.679550  698745 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1213 13:44:49.704857  698745 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 13:44:49.709366  698745 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:44:49.720617  698745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:44:49.801702  698745 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:44:49.829162  698745 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258 for IP: 192.168.85.2
	I1213 13:44:49.829184  698745 certs.go:195] generating shared ca certs ...
	I1213 13:44:49.829204  698745 certs.go:227] acquiring lock for ca certs: {Name:mkb6963f3134ffd486c672ddb3a967e56122d5d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:44:49.829410  698745 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key
	I1213 13:44:49.829484  698745 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key
	I1213 13:44:49.829499  698745 certs.go:257] generating profile certs ...
	I1213 13:44:49.829579  698745 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/client.key
	I1213 13:44:49.829597  698745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/client.crt with IP's: []
	I1213 13:44:50.092683  698745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/client.crt ...
	I1213 13:44:50.092716  698745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/client.crt: {Name:mk491b48bf9708b24faa7ab56a2b491e7bb86b9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:44:50.092924  698745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/client.key ...
	I1213 13:44:50.092943  698745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/client.key: {Name:mk732918537ae96e591eef960bfdd204829a8957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:44:50.093061  698745 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/apiserver.key.4015cdc3
	I1213 13:44:50.093081  698745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/apiserver.crt.4015cdc3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 13:44:50.171731  698745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/apiserver.crt.4015cdc3 ...
	I1213 13:44:50.171766  698745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/apiserver.crt.4015cdc3: {Name:mkcc53a10c8441d8d5c4d754c4727fdabd3a41c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:44:50.171945  698745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/apiserver.key.4015cdc3 ...
	I1213 13:44:50.171965  698745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/apiserver.key.4015cdc3: {Name:mk14f06ca87c1388ed2aad9d250b60ad0c21596f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:44:50.172068  698745 certs.go:382] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/apiserver.crt.4015cdc3 -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/apiserver.crt
	I1213 13:44:50.172188  698745 certs.go:386] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/apiserver.key.4015cdc3 -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/apiserver.key
	I1213 13:44:50.172283  698745 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/proxy-client.key
	I1213 13:44:50.172305  698745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/proxy-client.crt with IP's: []
	I1213 13:44:50.233853  698745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/proxy-client.crt ...
	I1213 13:44:50.233882  698745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/proxy-client.crt: {Name:mk7f004dbcdc82b1c061cc4bacb8ef717c18cc98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:44:50.234064  698745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/proxy-client.key ...
	I1213 13:44:50.234084  698745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/proxy-client.key: {Name:mke64b4a02a8526bed0ff555a92852ff8876a8a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:44:50.234295  698745 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem (1338 bytes)
	W1213 13:44:50.234346  698745 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130_empty.pem, impossibly tiny 0 bytes
	I1213 13:44:50.234359  698745 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:44:50.234395  698745 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:44:50.234430  698745 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:44:50.234474  698745 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem (1679 bytes)
	I1213 13:44:50.234548  698745 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:44:50.235213  698745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:44:50.253844  698745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 13:44:50.271631  698745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:44:50.289431  698745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 13:44:50.307276  698745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 13:44:50.325691  698745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 13:44:50.343759  698745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:44:50.363455  698745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/no-preload-992258/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 13:44:50.381957  698745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:44:50.550836  698745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem --> /usr/share/ca-certificates/394130.pem (1338 bytes)
	I1213 13:44:50.569155  698745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /usr/share/ca-certificates/3941302.pem (1708 bytes)
	I1213 13:44:50.587254  698745 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:44:50.600170  698745 ssh_runner.go:195] Run: openssl version
	I1213 13:44:50.606798  698745 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:44:50.614740  698745 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:44:50.622270  698745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:44:50.625978  698745 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:44:50.626033  698745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:44:50.673355  698745 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 13:44:50.683183  698745 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 13:44:50.691061  698745 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/394130.pem
	I1213 13:44:50.698711  698745 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/394130.pem /etc/ssl/certs/394130.pem
	I1213 13:44:50.706033  698745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/394130.pem
	I1213 13:44:50.709891  698745 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/394130.pem
	I1213 13:44:50.709944  698745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/394130.pem
	I1213 13:44:50.744666  698745 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 13:44:50.752728  698745 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/394130.pem /etc/ssl/certs/51391683.0
	I1213 13:44:50.760208  698745 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3941302.pem
	I1213 13:44:50.767381  698745 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3941302.pem /etc/ssl/certs/3941302.pem
	I1213 13:44:50.774623  698745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3941302.pem
	I1213 13:44:50.778413  698745 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/3941302.pem
	I1213 13:44:50.778463  698745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3941302.pem
	I1213 13:44:50.815284  698745 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 13:44:50.823662  698745 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3941302.pem /etc/ssl/certs/3ec20f2e.0
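	The block above copies each CA bundle into /usr/share/ca-certificates, asks openssl for its subject hash, and symlinks /etc/ssl/certs/<hash>.0 to it so TLS clients that scan the hashed directory can resolve the CA. A minimal stand-alone Go sketch of that pattern (paths and the `ln -fs` semantics are assumptions taken from the log lines; this is not minikube's certs.go):

	// Illustrative sketch only: install a PEM into the shared CA directory and
	// create the OpenSSL-style <subject-hash>.0 symlink for it.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func installCA(pemPath string) error {
		dst := filepath.Join("/usr/share/ca-certificates", filepath.Base(pemPath))
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return err
		}
		if err := os.WriteFile(dst, data, 0o644); err != nil {
			return err
		}
		// "openssl x509 -hash -noout" prints the subject hash, e.g. b5213941.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dst).Output()
		if err != nil {
			return err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // emulate `ln -fs`: replace any stale link
		return os.Symlink(dst, link)
	}

	func main() {
		if err := installCA(os.Args[1]); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}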
	I1213 13:44:50.831163  698745 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:44:50.834927  698745 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 13:44:50.834984  698745 kubeadm.go:401] StartCluster: {Name:no-preload-992258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-992258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:44:50.835070  698745 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:44:50.835110  698745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:44:50.863310  698745 cri.go:89] found id: ""
	I1213 13:44:50.863388  698745 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:44:50.874700  698745 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 13:44:50.885790  698745 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 13:44:50.885844  698745 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 13:44:50.895362  698745 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 13:44:50.895400  698745 kubeadm.go:158] found existing configuration files:
	
	I1213 13:44:50.895450  698745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 13:44:50.904389  698745 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 13:44:50.904443  698745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 13:44:50.913418  698745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 13:44:50.921360  698745 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 13:44:50.921413  698745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 13:44:50.929103  698745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 13:44:50.937013  698745 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 13:44:50.937069  698745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 13:44:50.944764  698745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 13:44:50.952399  698745 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 13:44:50.952449  698745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
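	The grep/rm sequence above drops any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443, so that the following `kubeadm init` regenerates it. A hedged sketch of that cleanup, assuming missing files are simply skipped (as the "No such file or directory" branches above show):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"path/filepath"
	)

	func cleanupStaleConfigs(endpoint string) {
		files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
		for _, f := range files {
			path := filepath.Join("/etc/kubernetes", f)
			data, err := os.ReadFile(path)
			if err != nil {
				continue // file absent: nothing stale to clean up
			}
			if !bytes.Contains(data, []byte(endpoint)) {
				fmt.Printf("removing stale %s\n", path)
				_ = os.Remove(path) // `rm -f` semantics: ignore errors
			}
		}
	}

	func main() {
		cleanupStaleConfigs("https://control-plane.minikube.internal:8443")
	}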
	I1213 13:44:50.959857  698745 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 13:44:51.069662  698745 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1213 13:44:51.135457  698745 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 13:44:51.781503  706714 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 13:44:51.781785  706714 start.go:159] libmachine.API.Create for "embed-certs-973953" (driver="docker")
	I1213 13:44:51.781824  706714 client.go:173] LocalClient.Create starting
	I1213 13:44:51.781925  706714 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem
	I1213 13:44:51.781968  706714 main.go:143] libmachine: Decoding PEM data...
	I1213 13:44:51.781999  706714 main.go:143] libmachine: Parsing certificate...
	I1213 13:44:51.782073  706714 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem
	I1213 13:44:51.782101  706714 main.go:143] libmachine: Decoding PEM data...
	I1213 13:44:51.782118  706714 main.go:143] libmachine: Parsing certificate...
	I1213 13:44:51.782544  706714 cli_runner.go:164] Run: docker network inspect embed-certs-973953 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 13:44:51.799388  706714 cli_runner.go:211] docker network inspect embed-certs-973953 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 13:44:51.799484  706714 network_create.go:284] running [docker network inspect embed-certs-973953] to gather additional debugging logs...
	I1213 13:44:51.799506  706714 cli_runner.go:164] Run: docker network inspect embed-certs-973953
	W1213 13:44:51.817082  706714 cli_runner.go:211] docker network inspect embed-certs-973953 returned with exit code 1
	I1213 13:44:51.817114  706714 network_create.go:287] error running [docker network inspect embed-certs-973953]: docker network inspect embed-certs-973953: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-973953 not found
	I1213 13:44:51.817130  706714 network_create.go:289] output of [docker network inspect embed-certs-973953]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-973953 not found
	
	** /stderr **
	I1213 13:44:51.817225  706714 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:44:51.836223  706714 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-90c6185d3a1c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:d7:d8:45:ed:62} reservation:<nil>}
	I1213 13:44:51.836990  706714 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b99c511b2851 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:f5:60:cf:cf:e0} reservation:<nil>}
	I1213 13:44:51.837430  706714 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8173e81c4a82 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:76:c5:9d:b0:f9} reservation:<nil>}
	I1213 13:44:51.838070  706714 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-cde7b54cbcc8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6e:9a:1b:7f:ad:0e} reservation:<nil>}
	I1213 13:44:51.838806  706714 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-6b03146af257 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:46:73:e9:11:84:32} reservation:<nil>}
	I1213 13:44:51.839345  706714 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-b8319a68d64a IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ae:35:bd:4d:47:93} reservation:<nil>}
	I1213 13:44:51.840190  706714 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f8aff0}
	I1213 13:44:51.840219  706714 network_create.go:124] attempt to create docker network embed-certs-973953 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1213 13:44:51.840282  706714 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-973953 embed-certs-973953
	I1213 13:44:51.888267  706714 network_create.go:108] docker network embed-certs-973953 192.168.103.0/24 created
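	The subnet probing above walks candidate /24s (192.168.49.0, 192.168.58.0, ... 192.168.103.0) and creates the Docker network on the first one that is free. A rough sketch, under the assumption that "taken" can be approximated by the .1 gateway address already being bound to a local bridge interface (the real network.go also inspects Docker networks and tracks reservations):

	package main

	import (
		"fmt"
		"net"
	)

	func localIPs() map[string]bool {
		taken := map[string]bool{}
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return taken
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok {
				taken[ipnet.IP.String()] = true
			}
		}
		return taken
	}

	func freePrivateSubnet() (string, error) {
		taken := localIPs()
		for third := 49; third <= 247; third += 9 {
			gateway := fmt.Sprintf("192.168.%d.1", third)
			if taken[gateway] {
				fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", third)
				continue
			}
			return fmt.Sprintf("192.168.%d.0/24", third), nil
		}
		return "", fmt.Errorf("no free private /24 found")
	}

	func main() {
		subnet, err := freePrivateSubnet()
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("using free private subnet", subnet)
	}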
	I1213 13:44:51.888296  706714 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-973953" container
	I1213 13:44:51.888380  706714 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 13:44:51.907026  706714 cli_runner.go:164] Run: docker volume create embed-certs-973953 --label name.minikube.sigs.k8s.io=embed-certs-973953 --label created_by.minikube.sigs.k8s.io=true
	I1213 13:44:51.924328  706714 oci.go:103] Successfully created a docker volume embed-certs-973953
	I1213 13:44:51.924399  706714 cli_runner.go:164] Run: docker run --rm --name embed-certs-973953-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-973953 --entrypoint /usr/bin/test -v embed-certs-973953:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 13:44:52.318363  706714 oci.go:107] Successfully prepared a docker volume embed-certs-973953
	I1213 13:44:52.318452  706714 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:44:52.318466  706714 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 13:44:52.318522  706714 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-973953:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
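	The extraction step above runs a throwaway container whose entrypoint is tar, with the preload tarball bind-mounted read-only and the named volume mounted at /extractDir. An illustrative exec-based Go sketch of the same invocation (assuming only that the docker CLI is on PATH; the arguments mirror the logged command, nothing more):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// extractPreload unpacks an lz4-compressed image tarball into a docker volume
	// by running tar inside a short-lived container.
	func extractPreload(tarball, volume, image string) error {
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		if err := extractPreload(os.Args[1], os.Args[2], os.Args[3]); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}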
	W1213 13:44:54.934940  691434 node_ready.go:57] node "old-k8s-version-417583" has "Ready":"False" status (will retry)
	W1213 13:44:56.935371  691434 node_ready.go:57] node "old-k8s-version-417583" has "Ready":"False" status (will retry)
	I1213 13:44:59.191574  698745 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 13:44:59.191643  698745 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 13:44:59.191755  698745 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 13:44:59.191851  698745 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1213 13:44:59.191893  698745 kubeadm.go:319] OS: Linux
	I1213 13:44:59.191950  698745 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 13:44:59.192009  698745 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 13:44:59.192069  698745 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 13:44:59.192141  698745 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 13:44:59.192198  698745 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 13:44:59.192241  698745 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 13:44:59.192319  698745 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 13:44:59.192387  698745 kubeadm.go:319] CGROUPS_IO: enabled
	I1213 13:44:59.192503  698745 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 13:44:59.192665  698745 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 13:44:59.192840  698745 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 13:44:59.192942  698745 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 13:44:59.194291  698745 out.go:252]   - Generating certificates and keys ...
	I1213 13:44:59.194393  698745 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 13:44:59.194502  698745 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 13:44:59.194616  698745 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 13:44:59.194689  698745 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 13:44:59.194796  698745 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 13:44:59.194885  698745 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 13:44:59.194962  698745 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 13:44:59.195153  698745 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-992258] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 13:44:59.195240  698745 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 13:44:59.195426  698745 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-992258] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 13:44:59.195527  698745 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 13:44:59.195628  698745 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 13:44:59.195707  698745 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 13:44:59.195809  698745 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 13:44:59.195905  698745 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 13:44:59.195990  698745 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 13:44:59.196062  698745 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 13:44:59.196171  698745 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 13:44:59.196246  698745 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 13:44:59.196318  698745 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 13:44:59.196416  698745 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 13:44:59.198283  698745 out.go:252]   - Booting up control plane ...
	I1213 13:44:59.198372  698745 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 13:44:59.198458  698745 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 13:44:59.198538  698745 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 13:44:59.198654  698745 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 13:44:59.198808  698745 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 13:44:59.198946  698745 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 13:44:59.199031  698745 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 13:44:59.199098  698745 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 13:44:59.199300  698745 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 13:44:59.199455  698745 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 13:44:59.199525  698745 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.915814ms
	I1213 13:44:59.199658  698745 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 13:44:59.199805  698745 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1213 13:44:59.199941  698745 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 13:44:59.200032  698745 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 13:44:59.200120  698745 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005777136s
	I1213 13:44:59.200177  698745 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.058644589s
	I1213 13:44:59.200248  698745 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.50167421s
	I1213 13:44:59.200409  698745 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 13:44:59.200575  698745 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 13:44:59.200634  698745 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 13:44:59.200872  698745 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-992258 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 13:44:59.200962  698745 kubeadm.go:319] [bootstrap-token] Using token: 33nmx1.h70s3ui0yogvgrrx
	I1213 13:44:59.202037  698745 out.go:252]   - Configuring RBAC rules ...
	I1213 13:44:59.202174  698745 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 13:44:59.202303  698745 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 13:44:59.202463  698745 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 13:44:59.202611  698745 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 13:44:59.202766  698745 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 13:44:59.202925  698745 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 13:44:59.203075  698745 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 13:44:59.203151  698745 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 13:44:59.203202  698745 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 13:44:59.203213  698745 kubeadm.go:319] 
	I1213 13:44:59.203272  698745 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 13:44:59.203280  698745 kubeadm.go:319] 
	I1213 13:44:59.203344  698745 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 13:44:59.203351  698745 kubeadm.go:319] 
	I1213 13:44:59.203376  698745 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 13:44:59.203452  698745 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 13:44:59.203533  698745 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 13:44:59.203550  698745 kubeadm.go:319] 
	I1213 13:44:59.203640  698745 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 13:44:59.203653  698745 kubeadm.go:319] 
	I1213 13:44:59.203733  698745 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 13:44:59.203747  698745 kubeadm.go:319] 
	I1213 13:44:59.203840  698745 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 13:44:59.203959  698745 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 13:44:59.204046  698745 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 13:44:59.204054  698745 kubeadm.go:319] 
	I1213 13:44:59.204147  698745 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 13:44:59.204260  698745 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 13:44:59.204273  698745 kubeadm.go:319] 
	I1213 13:44:59.204343  698745 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 33nmx1.h70s3ui0yogvgrrx \
	I1213 13:44:59.204438  698745 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ef8a7d1add12598ce2ec2dab13c01ff0d42437969bb9f662810a30bd819ab8f9 \
	I1213 13:44:59.204458  698745 kubeadm.go:319] 	--control-plane 
	I1213 13:44:59.204465  698745 kubeadm.go:319] 
	I1213 13:44:59.204534  698745 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 13:44:59.204553  698745 kubeadm.go:319] 
	I1213 13:44:59.204625  698745 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 33nmx1.h70s3ui0yogvgrrx \
	I1213 13:44:59.204746  698745 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ef8a7d1add12598ce2ec2dab13c01ff0d42437969bb9f662810a30bd819ab8f9 
	I1213 13:44:59.204768  698745 cni.go:84] Creating CNI manager for ""
	I1213 13:44:59.204790  698745 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:44:59.206058  698745 out.go:179] * Configuring CNI (Container Networking Interface) ...
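	The CNI decision logged at cni.go:143 recommends kindnet when the docker driver is paired with the cri-o runtime. A trivial stand-in for that check (deliberately simplified; the real logic covers more driver/runtime/flag combinations):

	package main

	import "fmt"

	func recommendCNI(driver, runtime string) string {
		if driver == "docker" && runtime == "crio" {
			return "kindnet"
		}
		return "" // leave CNI selection to the runtime or explicit --cni flags
	}

	func main() {
		fmt.Println(recommendCNI("docker", "crio")) // kindnet
	}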
	I1213 13:44:58.934898  691434 node_ready.go:49] node "old-k8s-version-417583" is "Ready"
	I1213 13:44:58.934927  691434 node_ready.go:38] duration metric: took 13.503124453s for node "old-k8s-version-417583" to be "Ready" ...
	I1213 13:44:58.934942  691434 api_server.go:52] waiting for apiserver process to appear ...
	I1213 13:44:58.934991  691434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:44:58.947469  691434 api_server.go:72] duration metric: took 13.940929637s to wait for apiserver process to appear ...
	I1213 13:44:58.947493  691434 api_server.go:88] waiting for apiserver healthz status ...
	I1213 13:44:58.947512  691434 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:44:58.953045  691434 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1213 13:44:58.954209  691434 api_server.go:141] control plane version: v1.28.0
	I1213 13:44:58.954234  691434 api_server.go:131] duration metric: took 6.734666ms to wait for apiserver health ...
	I1213 13:44:58.954246  691434 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 13:44:58.958652  691434 system_pods.go:59] 8 kube-system pods found
	I1213 13:44:58.958687  691434 system_pods.go:61] "coredns-5dd5756b68-88x45" [eb8ec109-32d0-4f39-a94c-f5e8190aa012] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:44:58.958694  691434 system_pods.go:61] "etcd-old-k8s-version-417583" [d35f1d9e-1a0c-40a1-a106-bdd0774af086] Running
	I1213 13:44:58.958705  691434 system_pods.go:61] "kindnet-qnxmc" [312f48e9-7f92-45ef-9351-a4565768c8b0] Running
	I1213 13:44:58.958712  691434 system_pods.go:61] "kube-apiserver-old-k8s-version-417583" [295c33a9-3c78-4cea-8bca-f0f5e464ba54] Running
	I1213 13:44:58.958722  691434 system_pods.go:61] "kube-controller-manager-old-k8s-version-417583" [02c61691-3eb4-49e2-bc02-4c3bde0db505] Running
	I1213 13:44:58.958731  691434 system_pods.go:61] "kube-proxy-r84xd" [c7442d18-20f9-4a27-b079-4e86e2804ab7] Running
	I1213 13:44:58.958739  691434 system_pods.go:61] "kube-scheduler-old-k8s-version-417583" [310fcd5f-ff24-4da5-a8a1-14d4f47a74ce] Running
	I1213 13:44:58.958747  691434 system_pods.go:61] "storage-provisioner" [8abf66a9-bb12-4644-b554-8a3ba8ce489a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:44:58.958759  691434 system_pods.go:74] duration metric: took 4.505935ms to wait for pod list to return data ...
	I1213 13:44:58.958771  691434 default_sa.go:34] waiting for default service account to be created ...
	I1213 13:44:58.961015  691434 default_sa.go:45] found service account: "default"
	I1213 13:44:58.961033  691434 default_sa.go:55] duration metric: took 2.22501ms for default service account to be created ...
	I1213 13:44:58.961041  691434 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 13:44:58.964195  691434 system_pods.go:86] 8 kube-system pods found
	I1213 13:44:58.964218  691434 system_pods.go:89] "coredns-5dd5756b68-88x45" [eb8ec109-32d0-4f39-a94c-f5e8190aa012] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:44:58.964224  691434 system_pods.go:89] "etcd-old-k8s-version-417583" [d35f1d9e-1a0c-40a1-a106-bdd0774af086] Running
	I1213 13:44:58.964231  691434 system_pods.go:89] "kindnet-qnxmc" [312f48e9-7f92-45ef-9351-a4565768c8b0] Running
	I1213 13:44:58.964235  691434 system_pods.go:89] "kube-apiserver-old-k8s-version-417583" [295c33a9-3c78-4cea-8bca-f0f5e464ba54] Running
	I1213 13:44:58.964239  691434 system_pods.go:89] "kube-controller-manager-old-k8s-version-417583" [02c61691-3eb4-49e2-bc02-4c3bde0db505] Running
	I1213 13:44:58.964242  691434 system_pods.go:89] "kube-proxy-r84xd" [c7442d18-20f9-4a27-b079-4e86e2804ab7] Running
	I1213 13:44:58.964245  691434 system_pods.go:89] "kube-scheduler-old-k8s-version-417583" [310fcd5f-ff24-4da5-a8a1-14d4f47a74ce] Running
	I1213 13:44:58.964250  691434 system_pods.go:89] "storage-provisioner" [8abf66a9-bb12-4644-b554-8a3ba8ce489a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:44:58.964268  691434 retry.go:31] will retry after 265.784007ms: missing components: kube-dns
	I1213 13:44:59.234398  691434 system_pods.go:86] 8 kube-system pods found
	I1213 13:44:59.234434  691434 system_pods.go:89] "coredns-5dd5756b68-88x45" [eb8ec109-32d0-4f39-a94c-f5e8190aa012] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:44:59.234442  691434 system_pods.go:89] "etcd-old-k8s-version-417583" [d35f1d9e-1a0c-40a1-a106-bdd0774af086] Running
	I1213 13:44:59.234451  691434 system_pods.go:89] "kindnet-qnxmc" [312f48e9-7f92-45ef-9351-a4565768c8b0] Running
	I1213 13:44:59.234457  691434 system_pods.go:89] "kube-apiserver-old-k8s-version-417583" [295c33a9-3c78-4cea-8bca-f0f5e464ba54] Running
	I1213 13:44:59.234464  691434 system_pods.go:89] "kube-controller-manager-old-k8s-version-417583" [02c61691-3eb4-49e2-bc02-4c3bde0db505] Running
	I1213 13:44:59.234471  691434 system_pods.go:89] "kube-proxy-r84xd" [c7442d18-20f9-4a27-b079-4e86e2804ab7] Running
	I1213 13:44:59.234480  691434 system_pods.go:89] "kube-scheduler-old-k8s-version-417583" [310fcd5f-ff24-4da5-a8a1-14d4f47a74ce] Running
	I1213 13:44:59.234488  691434 system_pods.go:89] "storage-provisioner" [8abf66a9-bb12-4644-b554-8a3ba8ce489a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:44:59.234511  691434 retry.go:31] will retry after 377.420277ms: missing components: kube-dns
	I1213 13:44:59.616638  691434 system_pods.go:86] 8 kube-system pods found
	I1213 13:44:59.616665  691434 system_pods.go:89] "coredns-5dd5756b68-88x45" [eb8ec109-32d0-4f39-a94c-f5e8190aa012] Running
	I1213 13:44:59.616670  691434 system_pods.go:89] "etcd-old-k8s-version-417583" [d35f1d9e-1a0c-40a1-a106-bdd0774af086] Running
	I1213 13:44:59.616674  691434 system_pods.go:89] "kindnet-qnxmc" [312f48e9-7f92-45ef-9351-a4565768c8b0] Running
	I1213 13:44:59.616678  691434 system_pods.go:89] "kube-apiserver-old-k8s-version-417583" [295c33a9-3c78-4cea-8bca-f0f5e464ba54] Running
	I1213 13:44:59.616683  691434 system_pods.go:89] "kube-controller-manager-old-k8s-version-417583" [02c61691-3eb4-49e2-bc02-4c3bde0db505] Running
	I1213 13:44:59.616686  691434 system_pods.go:89] "kube-proxy-r84xd" [c7442d18-20f9-4a27-b079-4e86e2804ab7] Running
	I1213 13:44:59.616689  691434 system_pods.go:89] "kube-scheduler-old-k8s-version-417583" [310fcd5f-ff24-4da5-a8a1-14d4f47a74ce] Running
	I1213 13:44:59.616692  691434 system_pods.go:89] "storage-provisioner" [8abf66a9-bb12-4644-b554-8a3ba8ce489a] Running
	I1213 13:44:59.616700  691434 system_pods.go:126] duration metric: took 655.652789ms to wait for k8s-apps to be running ...
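	The three pod listings above show the generic poll-and-retry pattern: list kube-system pods, note which required components are still missing (kube-dns here), sleep for the next backoff, and try again until nothing is missing. A self-contained sketch of that loop, with a fake pod lister standing in for the API calls (not client-go):

	package main

	import (
		"fmt"
		"strings"
		"time"
	)

	func waitForComponents(listPods func() map[string]string, required []string, backoffs []time.Duration) error {
		for attempt := 0; ; attempt++ {
			pods := listPods()
			var missing []string
			for _, comp := range required {
				running := false
				for name, phase := range pods {
					if strings.HasPrefix(name, comp) && phase == "Running" {
						running = true
						break
					}
				}
				if !running {
					missing = append(missing, comp)
				}
			}
			if len(missing) == 0 {
				return nil
			}
			if attempt >= len(backoffs) {
				return fmt.Errorf("timed out, missing components: %s", strings.Join(missing, ", "))
			}
			fmt.Printf("will retry after %v: missing components: %s\n", backoffs[attempt], strings.Join(missing, ", "))
			time.Sleep(backoffs[attempt])
		}
	}

	func main() {
		calls := 0
		fake := func() map[string]string {
			calls++
			if calls < 3 {
				return map[string]string{"coredns-5dd5756b68-88x45": "Pending", "kube-proxy-r84xd": "Running"}
			}
			return map[string]string{"coredns-5dd5756b68-88x45": "Running", "kube-proxy-r84xd": "Running"}
		}
		_ = waitForComponents(fake, []string{"coredns", "kube-proxy"},
			[]time.Duration{200 * time.Millisecond, 400 * time.Millisecond, 800 * time.Millisecond})
	}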
	I1213 13:44:59.616708  691434 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 13:44:59.616754  691434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:44:59.630290  691434 system_svc.go:56] duration metric: took 13.570096ms WaitForService to wait for kubelet
	I1213 13:44:59.630326  691434 kubeadm.go:587] duration metric: took 14.623790745s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:44:59.630351  691434 node_conditions.go:102] verifying NodePressure condition ...
	I1213 13:44:59.632657  691434 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 13:44:59.632696  691434 node_conditions.go:123] node cpu capacity is 8
	I1213 13:44:59.632717  691434 node_conditions.go:105] duration metric: took 2.359716ms to run NodePressure ...
	I1213 13:44:59.632735  691434 start.go:242] waiting for startup goroutines ...
	I1213 13:44:59.632750  691434 start.go:247] waiting for cluster config update ...
	I1213 13:44:59.632765  691434 start.go:256] writing updated cluster config ...
	I1213 13:44:59.633077  691434 ssh_runner.go:195] Run: rm -f paused
	I1213 13:44:59.636977  691434 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:44:59.640625  691434 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-88x45" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:44:59.644644  691434 pod_ready.go:94] pod "coredns-5dd5756b68-88x45" is "Ready"
	I1213 13:44:59.644664  691434 pod_ready.go:86] duration metric: took 4.011685ms for pod "coredns-5dd5756b68-88x45" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:44:59.646995  691434 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-417583" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:44:59.650505  691434 pod_ready.go:94] pod "etcd-old-k8s-version-417583" is "Ready"
	I1213 13:44:59.650532  691434 pod_ready.go:86] duration metric: took 3.515491ms for pod "etcd-old-k8s-version-417583" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:44:59.653023  691434 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-417583" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:44:59.656560  691434 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-417583" is "Ready"
	I1213 13:44:59.656577  691434 pod_ready.go:86] duration metric: took 3.537307ms for pod "kube-apiserver-old-k8s-version-417583" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:44:59.658931  691434 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-417583" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:45:00.041151  691434 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-417583" is "Ready"
	I1213 13:45:00.041183  691434 pod_ready.go:86] duration metric: took 382.230999ms for pod "kube-controller-manager-old-k8s-version-417583" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:45:00.242164  691434 pod_ready.go:83] waiting for pod "kube-proxy-r84xd" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:45:00.641167  691434 pod_ready.go:94] pod "kube-proxy-r84xd" is "Ready"
	I1213 13:45:00.641192  691434 pod_ready.go:86] duration metric: took 399.005161ms for pod "kube-proxy-r84xd" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:45:00.841400  691434 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-417583" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:45:01.241169  691434 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-417583" is "Ready"
	I1213 13:45:01.241192  691434 pod_ready.go:86] duration metric: took 399.767673ms for pod "kube-scheduler-old-k8s-version-417583" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:45:01.241204  691434 pod_ready.go:40] duration metric: took 1.604196225s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:45:01.286298  691434 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1213 13:45:01.288353  691434 out.go:203] 
	W1213 13:45:01.289505  691434 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1213 13:45:01.290661  691434 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1213 13:45:01.292098  691434 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-417583" cluster and "default" namespace by default
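	The final lines warn about a kubectl/cluster minor-version skew of 6. A small sketch of that check, assuming both versions share a major and that kubectl's documented support window is one minor either side of the cluster:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	func minorOf(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0
		}
		m, _ := strconv.Atoi(parts[1])
		return m
	}

	func main() {
		kubectl, cluster := "1.34.3", "1.28.0"
		skew := minorOf(kubectl) - minorOf(cluster)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
		if skew > 1 {
			fmt.Printf("! kubectl is version %s, which may have incompatibilities with Kubernetes %s.\n", kubectl, cluster)
		}
	}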
	I1213 13:44:57.108396  706714 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-973953:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.789799134s)
	I1213 13:44:57.108448  706714 kic.go:203] duration metric: took 4.789977205s to extract preloaded images to volume ...
	W1213 13:44:57.108574  706714 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1213 13:44:57.108614  706714 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1213 13:44:57.108691  706714 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 13:44:57.180710  706714 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-973953 --name embed-certs-973953 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-973953 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-973953 --network embed-certs-973953 --ip 192.168.103.2 --volume embed-certs-973953:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 13:44:57.562674  706714 cli_runner.go:164] Run: docker container inspect embed-certs-973953 --format={{.State.Running}}
	I1213 13:44:57.587475  706714 cli_runner.go:164] Run: docker container inspect embed-certs-973953 --format={{.State.Status}}
	I1213 13:44:57.610707  706714 cli_runner.go:164] Run: docker exec embed-certs-973953 stat /var/lib/dpkg/alternatives/iptables
	I1213 13:44:57.660885  706714 oci.go:144] the created container "embed-certs-973953" has a running status.
	I1213 13:44:57.660922  706714 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/embed-certs-973953/id_rsa...
	I1213 13:44:57.733369  706714 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-390571/.minikube/machines/embed-certs-973953/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 13:44:57.758938  706714 cli_runner.go:164] Run: docker container inspect embed-certs-973953 --format={{.State.Status}}
	I1213 13:44:57.775769  706714 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 13:44:57.775813  706714 kic_runner.go:114] Args: [docker exec --privileged embed-certs-973953 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 13:44:57.845979  706714 cli_runner.go:164] Run: docker container inspect embed-certs-973953 --format={{.State.Status}}
	I1213 13:44:57.872221  706714 machine.go:94] provisionDockerMachine start ...
	I1213 13:44:57.872343  706714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-973953
	I1213 13:44:57.898577  706714 main.go:143] libmachine: Using SSH client type: native
	I1213 13:44:57.899028  706714 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33481 <nil> <nil>}
	I1213 13:44:57.899082  706714 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:44:57.899807  706714 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35934->127.0.0.1:33481: read: connection reset by peer
	I1213 13:45:01.033733  706714 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-973953
	
	I1213 13:45:01.033763  706714 ubuntu.go:182] provisioning hostname "embed-certs-973953"
	I1213 13:45:01.033846  706714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-973953
	I1213 13:45:01.055317  706714 main.go:143] libmachine: Using SSH client type: native
	I1213 13:45:01.055610  706714 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33481 <nil> <nil>}
	I1213 13:45:01.055630  706714 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-973953 && echo "embed-certs-973953" | sudo tee /etc/hostname
	I1213 13:45:01.206282  706714 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-973953
	
	I1213 13:45:01.206368  706714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-973953
	I1213 13:45:01.224754  706714 main.go:143] libmachine: Using SSH client type: native
	I1213 13:45:01.225009  706714 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33481 <nil> <nil>}
	I1213 13:45:01.225027  706714 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-973953' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-973953/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-973953' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:45:01.366022  706714 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 13:45:01.366055  706714 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-390571/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-390571/.minikube}
	I1213 13:45:01.366084  706714 ubuntu.go:190] setting up certificates
	I1213 13:45:01.366098  706714 provision.go:84] configureAuth start
	I1213 13:45:01.366150  706714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-973953
	I1213 13:45:01.385677  706714 provision.go:143] copyHostCerts
	I1213 13:45:01.385747  706714 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem, removing ...
	I1213 13:45:01.385763  706714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem
	I1213 13:45:01.385866  706714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem (1078 bytes)
	I1213 13:45:01.386250  706714 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem, removing ...
	I1213 13:45:01.386844  706714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem
	I1213 13:45:01.386930  706714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem (1123 bytes)
	I1213 13:45:01.387073  706714 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem, removing ...
	I1213 13:45:01.387083  706714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem
	I1213 13:45:01.387122  706714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem (1679 bytes)
	I1213 13:45:01.387201  706714 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem org=jenkins.embed-certs-973953 san=[127.0.0.1 192.168.103.2 embed-certs-973953 localhost minikube]
	I1213 13:45:01.504920  706714 provision.go:177] copyRemoteCerts
	I1213 13:45:01.504989  706714 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:45:01.505033  706714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-973953
	I1213 13:45:01.523898  706714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33481 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/embed-certs-973953/id_rsa Username:docker}
	I1213 13:45:01.622202  706714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:45:01.641643  706714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1213 13:45:01.660699  706714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 13:45:01.680277  706714 provision.go:87] duration metric: took 314.151562ms to configureAuth
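	configureAuth above generates a server certificate whose SANs are [127.0.0.1 192.168.103.2 embed-certs-973953 localhost minikube], signed by the profile's CA. A standard-library sketch of that step (a throwaway CA is created here purely to keep the example self-contained; minikube reuses its existing ca.pem/ca-key.pem instead):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA key and self-signed CA certificate.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the SANs seen in the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "embed-certs-973953", Organization: []string{"jenkins.embed-certs-973953"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"embed-certs-973953", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}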
	I1213 13:45:01.680304  706714 ubuntu.go:206] setting minikube options for container-runtime
	I1213 13:45:01.680470  706714 config.go:182] Loaded profile config "embed-certs-973953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:45:01.680593  706714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-973953
	I1213 13:45:01.699056  706714 main.go:143] libmachine: Using SSH client type: native
	I1213 13:45:01.699287  706714 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33481 <nil> <nil>}
	I1213 13:45:01.699306  706714 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:45:01.983413  706714 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:45:01.983438  706714 machine.go:97] duration metric: took 4.111184333s to provisionDockerMachine
	I1213 13:45:01.983450  706714 client.go:176] duration metric: took 10.201616512s to LocalClient.Create
	I1213 13:45:01.983470  706714 start.go:167] duration metric: took 10.201698821s to libmachine.API.Create "embed-certs-973953"
	I1213 13:45:01.983478  706714 start.go:293] postStartSetup for "embed-certs-973953" (driver="docker")
	I1213 13:45:01.983488  706714 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:45:01.983551  706714 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:45:01.983633  706714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-973953
	I1213 13:45:02.002386  706714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33481 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/embed-certs-973953/id_rsa Username:docker}
	I1213 13:45:02.100640  706714 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:45:02.104373  706714 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 13:45:02.104398  706714 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 13:45:02.104411  706714 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/addons for local assets ...
	I1213 13:45:02.104470  706714 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/files for local assets ...
	I1213 13:45:02.104572  706714 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem -> 3941302.pem in /etc/ssl/certs
	I1213 13:45:02.104694  706714 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 13:45:02.112885  706714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:45:02.133404  706714 start.go:296] duration metric: took 149.909391ms for postStartSetup
	I1213 13:45:02.133794  706714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-973953
	I1213 13:45:02.151586  706714 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/config.json ...
	I1213 13:45:02.151912  706714 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:45:02.151987  706714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-973953
	I1213 13:45:02.170589  706714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33481 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/embed-certs-973953/id_rsa Username:docker}
	I1213 13:45:02.263938  706714 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 13:45:02.268372  706714 start.go:128] duration metric: took 10.488549798s to createHost
	I1213 13:45:02.268397  706714 start.go:83] releasing machines lock for "embed-certs-973953", held for 10.488725634s
	I1213 13:45:02.268480  706714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-973953
	I1213 13:45:02.287253  706714 ssh_runner.go:195] Run: cat /version.json
	I1213 13:45:02.287300  706714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-973953
	I1213 13:45:02.287345  706714 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:45:02.287425  706714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-973953
	I1213 13:45:02.309467  706714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33481 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/embed-certs-973953/id_rsa Username:docker}
	I1213 13:45:02.310237  706714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33481 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/embed-certs-973953/id_rsa Username:docker}
	I1213 13:45:02.472516  706714 ssh_runner.go:195] Run: systemctl --version
	I1213 13:45:02.482016  706714 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:45:02.522579  706714 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 13:45:02.527424  706714 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:45:02.527484  706714 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:45:02.554365  706714 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 13:45:02.554394  706714 start.go:496] detecting cgroup driver to use...
	I1213 13:45:02.554450  706714 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 13:45:02.554515  706714 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:45:02.574270  706714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:45:02.588455  706714 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:45:02.588516  706714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:45:02.606487  706714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:45:02.624150  706714 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:45:02.713979  706714 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:45:02.806730  706714 docker.go:234] disabling docker service ...
	I1213 13:45:02.806808  706714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:45:02.825520  706714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:45:02.838006  706714 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:45:02.925419  706714 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:45:03.011714  706714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:45:03.033937  706714 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:45:03.048984  706714 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 13:45:03.049046  706714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:45:03.060888  706714 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 13:45:03.060967  706714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:45:03.070379  706714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:45:03.079477  706714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:45:03.090485  706714 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:45:03.099296  706714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:45:03.109368  706714 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:45:03.129500  706714 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:45:03.139556  706714 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:45:03.147952  706714 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:45:03.156682  706714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:45:03.248699  706714 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 13:45:03.404893  706714 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:45:03.404980  706714 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:45:03.409538  706714 start.go:564] Will wait 60s for crictl version
	I1213 13:45:03.409605  706714 ssh_runner.go:195] Run: which crictl
	I1213 13:45:03.413876  706714 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 13:45:03.440300  706714 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 13:45:03.440398  706714 ssh_runner.go:195] Run: crio --version
	I1213 13:45:03.472856  706714 ssh_runner.go:195] Run: crio --version
	I1213 13:45:03.507392  706714 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
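	(The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place and point crictl at the CRI-O socket before the runtime is restarted. A minimal sketch of double-checking that result on the node; the paths and the socket come from the log above, while the grep patterns are purely illustrative and not part of minikube:)
	# confirm the pause image, cgroup manager and conmon cgroup landed in the drop-in config
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# confirm crictl reaches the restarted runtime over the configured endpoint
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version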
	I1213 13:44:59.207093  698745 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 13:44:59.211584  698745 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1213 13:44:59.211602  698745 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1213 13:44:59.224678  698745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 13:44:59.429212  698745 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 13:44:59.429291  698745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:44:59.429320  698745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-992258 minikube.k8s.io/updated_at=2025_12_13T13_44_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7 minikube.k8s.io/name=no-preload-992258 minikube.k8s.io/primary=true
	I1213 13:44:59.439400  698745 ops.go:34] apiserver oom_adj: -16
	I1213 13:44:59.517253  698745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:45:00.017979  698745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:45:00.517798  698745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:45:01.018336  698745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:45:01.517784  698745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:45:02.017405  698745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:45:02.517411  698745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:45:03.017400  698745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:45:03.518046  698745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:45:04.017582  698745 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:45:04.092041  698745 kubeadm.go:1114] duration metric: took 4.662810406s to wait for elevateKubeSystemPrivileges
	I1213 13:45:04.092076  698745 kubeadm.go:403] duration metric: took 13.257097352s to StartCluster
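	(The repeated "kubectl get sa default" calls above are a readiness poll: kubeadm creates the default service account asynchronously, so minikube keeps retrying until it resolves before declaring elevateKubeSystemPrivileges done. A rough shell equivalent of that wait, using the binary path from the log; the 0.5s interval is an assumption, not minikube's actual value:)
	# poll until the default service account exists in the target cluster
	until sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # assumed retry interval for illustration only
	done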
	I1213 13:45:04.092119  698745 settings.go:142] acquiring lock: {Name:mkb44193ba58b09d8615650747eaad19c43e1a80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:45:04.092205  698745 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:45:04.093552  698745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/kubeconfig: {Name:mke96882ff9199e558f67b9408c8f04265bde7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:45:04.093813  698745 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:45:04.093835  698745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 13:45:04.093919  698745 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 13:45:04.094022  698745 config.go:182] Loaded profile config "no-preload-992258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:45:04.094048  698745 addons.go:70] Setting storage-provisioner=true in profile "no-preload-992258"
	I1213 13:45:04.094075  698745 addons.go:239] Setting addon storage-provisioner=true in "no-preload-992258"
	I1213 13:45:04.094086  698745 addons.go:70] Setting default-storageclass=true in profile "no-preload-992258"
	I1213 13:45:04.094105  698745 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-992258"
	I1213 13:45:04.094108  698745 host.go:66] Checking if "no-preload-992258" exists ...
	I1213 13:45:04.094548  698745 cli_runner.go:164] Run: docker container inspect no-preload-992258 --format={{.State.Status}}
	I1213 13:45:04.094742  698745 cli_runner.go:164] Run: docker container inspect no-preload-992258 --format={{.State.Status}}
	I1213 13:45:04.095221  698745 out.go:179] * Verifying Kubernetes components...
	I1213 13:45:04.096683  698745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:45:04.120539  698745 addons.go:239] Setting addon default-storageclass=true in "no-preload-992258"
	I1213 13:45:04.120599  698745 host.go:66] Checking if "no-preload-992258" exists ...
	I1213 13:45:04.121114  698745 cli_runner.go:164] Run: docker container inspect no-preload-992258 --format={{.State.Status}}
	I1213 13:45:04.122539  698745 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 13:45:04.123687  698745 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:45:04.123728  698745 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 13:45:04.124275  698745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-992258
	I1213 13:45:04.153526  698745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/no-preload-992258/id_rsa Username:docker}
	I1213 13:45:04.154757  698745 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 13:45:04.155099  698745 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 13:45:04.155310  698745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-992258
	I1213 13:45:04.191440  698745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/no-preload-992258/id_rsa Username:docker}
	I1213 13:45:04.208083  698745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 13:45:04.281017  698745 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:45:04.284536  698745 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:45:04.309058  698745 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 13:45:04.396042  698745 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1213 13:45:04.397344  698745 node_ready.go:35] waiting up to 6m0s for node "no-preload-992258" to be "Ready" ...
	I1213 13:45:04.618334  698745 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
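	(The long sed pipeline at 13:45:04.208083 injects a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway 192.168.85.1, then replaces the ConfigMap. A sketch of inspecting the injected stanza afterwards; the grep window is arbitrary and only for illustration:)
	# show the hosts stanza inserted into the Corefile
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o yaml | grep -A4 'hosts {'
	# expected, per the log: "192.168.85.1 host.minikube.internal" followed by "fallthrough"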
	I1213 13:45:03.508624  706714 cli_runner.go:164] Run: docker network inspect embed-certs-973953 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:45:03.528260  706714 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1213 13:45:03.533479  706714 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:45:03.545072  706714 kubeadm.go:884] updating cluster {Name:embed-certs-973953 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-973953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:45:03.545188  706714 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:45:03.545247  706714 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:45:03.588237  706714 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:45:03.588267  706714 crio.go:433] Images already preloaded, skipping extraction
	I1213 13:45:03.588328  706714 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:45:03.618488  706714 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:45:03.618509  706714 cache_images.go:86] Images are preloaded, skipping loading
	I1213 13:45:03.618517  706714 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1213 13:45:03.618621  706714 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-973953 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-973953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 13:45:03.618705  706714 ssh_runner.go:195] Run: crio config
	I1213 13:45:03.667258  706714 cni.go:84] Creating CNI manager for ""
	I1213 13:45:03.667284  706714 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:45:03.667305  706714 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 13:45:03.667335  706714 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-973953 NodeName:embed-certs-973953 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:45:03.667497  706714 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-973953"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 13:45:03.667573  706714 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 13:45:03.676577  706714 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:45:03.676641  706714 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:45:03.684945  706714 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1213 13:45:03.698140  706714 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 13:45:03.713980  706714 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
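	(The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new and later copied to /var/tmp/minikube/kubeadm.yaml before init, as seen at 13:45:04.769291 below. If the generated YAML ever needs manual vetting, a dry run against the staged file is one option; this is a hedged sketch and not something the test harness runs:)
	# render what kubeadm would create from the generated config without changing the node
	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run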
	I1213 13:45:03.726798  706714 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1213 13:45:03.730819  706714 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:45:03.741234  706714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:45:03.834019  706714 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:45:03.866729  706714 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953 for IP: 192.168.103.2
	I1213 13:45:03.866758  706714 certs.go:195] generating shared ca certs ...
	I1213 13:45:03.866792  706714 certs.go:227] acquiring lock for ca certs: {Name:mkb6963f3134ffd486c672ddb3a967e56122d5d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:45:03.866974  706714 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key
	I1213 13:45:03.867032  706714 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key
	I1213 13:45:03.867046  706714 certs.go:257] generating profile certs ...
	I1213 13:45:03.867132  706714 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/client.key
	I1213 13:45:03.867158  706714 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/client.crt with IP's: []
	I1213 13:45:03.921488  706714 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/client.crt ...
	I1213 13:45:03.921522  706714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/client.crt: {Name:mk9e5678ee457bb9044d38c9b28e4d495cfe5da0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:45:03.921721  706714 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/client.key ...
	I1213 13:45:03.921740  706714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/client.key: {Name:mkee70bd0c308cc574ee58f0198c4a11f95ae082 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:45:03.921875  706714 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/apiserver.key.a9523a89
	I1213 13:45:03.921898  706714 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/apiserver.crt.a9523a89 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1213 13:45:04.059391  706714 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/apiserver.crt.a9523a89 ...
	I1213 13:45:04.059443  706714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/apiserver.crt.a9523a89: {Name:mk9bf0c94af2f37177bba8e346973a06f8486d14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:45:04.059690  706714 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/apiserver.key.a9523a89 ...
	I1213 13:45:04.059717  706714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/apiserver.key.a9523a89: {Name:mk5e7778acc750994b69566743917cea22a6da84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:45:04.060373  706714 certs.go:382] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/apiserver.crt.a9523a89 -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/apiserver.crt
	I1213 13:45:04.060524  706714 certs.go:386] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/apiserver.key.a9523a89 -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/apiserver.key
	I1213 13:45:04.060654  706714 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/proxy-client.key
	I1213 13:45:04.060676  706714 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/proxy-client.crt with IP's: []
	I1213 13:45:04.138712  706714 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/proxy-client.crt ...
	I1213 13:45:04.138751  706714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/proxy-client.crt: {Name:mk6acc376f7d36c5ad8926f50d91c2080c2f5df0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:45:04.138987  706714 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/proxy-client.key ...
	I1213 13:45:04.139014  706714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/proxy-client.key: {Name:mk08894d8fab4de7f00d29ac3f75e6b596294c62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:45:04.139281  706714 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem (1338 bytes)
	W1213 13:45:04.139333  706714 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130_empty.pem, impossibly tiny 0 bytes
	I1213 13:45:04.139344  706714 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:45:04.139380  706714 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:45:04.139413  706714 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:45:04.139463  706714 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem (1679 bytes)
	I1213 13:45:04.139521  706714 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:45:04.141520  706714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:45:04.177124  706714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 13:45:04.207222  706714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:45:04.234951  706714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 13:45:04.263319  706714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1213 13:45:04.285644  706714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 13:45:04.310666  706714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:45:04.334019  706714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1213 13:45:04.356671  706714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:45:04.381747  706714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem --> /usr/share/ca-certificates/394130.pem (1338 bytes)
	I1213 13:45:04.404573  706714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /usr/share/ca-certificates/3941302.pem (1708 bytes)
	I1213 13:45:04.431691  706714 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
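	(Once the certificates are copied into /var/lib/minikube/certs, a quick consistency check with openssl is possible; a minimal sketch, not part of minikube's flow, assuming RSA key pairs, which the key sizes in the log suggest. The two digests should match if the apiserver cert and key belong together:)
	sudo openssl x509 -noout -modulus -in /var/lib/minikube/certs/apiserver.crt | openssl md5
	sudo openssl rsa -noout -modulus -in /var/lib/minikube/certs/apiserver.key | openssl md5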
	I1213 13:45:04.450314  706714 ssh_runner.go:195] Run: openssl version
	I1213 13:45:04.459066  706714 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/394130.pem
	I1213 13:45:04.469627  706714 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/394130.pem /etc/ssl/certs/394130.pem
	I1213 13:45:04.479526  706714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/394130.pem
	I1213 13:45:04.483959  706714 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/394130.pem
	I1213 13:45:04.484018  706714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/394130.pem
	I1213 13:45:04.539079  706714 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 13:45:04.548087  706714 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/394130.pem /etc/ssl/certs/51391683.0
	I1213 13:45:04.557852  706714 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3941302.pem
	I1213 13:45:04.567412  706714 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3941302.pem /etc/ssl/certs/3941302.pem
	I1213 13:45:04.576721  706714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3941302.pem
	I1213 13:45:04.581443  706714 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/3941302.pem
	I1213 13:45:04.581529  706714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3941302.pem
	I1213 13:45:04.629018  706714 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 13:45:04.638233  706714 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3941302.pem /etc/ssl/certs/3ec20f2e.0
	I1213 13:45:04.647230  706714 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:45:04.657142  706714 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:45:04.666566  706714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:45:04.670884  706714 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:45:04.670963  706714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:45:04.706349  706714 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 13:45:04.714575  706714 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
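	(The link names 51391683.0, 3ec20f2e.0 and b5213941.0 are the OpenSSL subject hashes of the respective certificates; /etc/ssl/certs is indexed by those hashes for CA lookup. A sketch of how such a link is derived, mirroring the minikubeCA steps above with the hash computed inline:)
	# compute the subject hash and create the lookup symlink OpenSSL expects
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"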
	I1213 13:45:04.723329  706714 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:45:04.728002  706714 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 13:45:04.728064  706714 kubeadm.go:401] StartCluster: {Name:embed-certs-973953 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-973953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:45:04.728149  706714 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:45:04.728229  706714 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:45:04.760984  706714 cri.go:89] found id: ""
	I1213 13:45:04.761066  706714 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:45:04.769291  706714 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 13:45:04.777010  706714 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 13:45:04.777073  706714 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 13:45:04.784681  706714 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 13:45:04.784700  706714 kubeadm.go:158] found existing configuration files:
	
	I1213 13:45:04.784744  706714 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 13:45:04.792227  706714 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 13:45:04.792289  706714 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 13:45:04.800315  706714 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 13:45:04.807658  706714 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 13:45:04.807708  706714 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 13:45:04.814740  706714 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 13:45:04.821990  706714 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 13:45:04.822039  706714 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 13:45:04.830085  706714 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 13:45:04.838098  706714 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 13:45:04.838147  706714 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
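	(Each of the four kubeconfig-style files is checked for the expected control-plane endpoint and removed when the grep fails; on a first start this simply clears files that do not exist yet. The same guard condensed into a loop, shown only to illustrate the pattern, not code minikube runs:)
	# drop any kubeconfig that does not point at the expected control-plane endpoint
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done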
	I1213 13:45:04.845468  706714 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 13:45:04.908745  706714 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1213 13:45:04.970215  706714 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 13:45:04.619673  698745 addons.go:530] duration metric: took 525.761944ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1213 13:45:04.901003  698745 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-992258" context rescaled to 1 replicas
	W1213 13:45:06.400026  698745 node_ready.go:57] node "no-preload-992258" has "Ready":"False" status (will retry)
	W1213 13:45:08.400652  698745 node_ready.go:57] node "no-preload-992258" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 13 13:44:58 old-k8s-version-417583 crio[772]: time="2025-12-13T13:44:58.839452593Z" level=info msg="Starting container: 6859d9e2d36a55701374e83ee3086a571c05df76701b25ee14371b8e34e57344" id=353fecfc-0014-4d63-a5c0-73ee8b1d98b8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:44:58 old-k8s-version-417583 crio[772]: time="2025-12-13T13:44:58.841112218Z" level=info msg="Started container" PID=2138 containerID=6859d9e2d36a55701374e83ee3086a571c05df76701b25ee14371b8e34e57344 description=kube-system/coredns-5dd5756b68-88x45/coredns id=353fecfc-0014-4d63-a5c0-73ee8b1d98b8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=913ad05715c423704767645601a091880b8f775f2e308881d0a82d279345f132
	Dec 13 13:45:01 old-k8s-version-417583 crio[772]: time="2025-12-13T13:45:01.734744709Z" level=info msg="Running pod sandbox: default/busybox/POD" id=fbc52a54-8722-4f31-9959-a5f122ea2826 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:45:01 old-k8s-version-417583 crio[772]: time="2025-12-13T13:45:01.734865477Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:45:01 old-k8s-version-417583 crio[772]: time="2025-12-13T13:45:01.739435598Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:33693c0be286b037461b7311792d4515f90573b8b968a72580d7fbe79d859a4b UID:62a21653-fb04-4160-81ac-a3647bfa3884 NetNS:/var/run/netns/2f7e4144-182d-44b3-9b1e-b30fd7781adb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000812ad8}] Aliases:map[]}"
	Dec 13 13:45:01 old-k8s-version-417583 crio[772]: time="2025-12-13T13:45:01.73946238Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 13 13:45:01 old-k8s-version-417583 crio[772]: time="2025-12-13T13:45:01.74984947Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:33693c0be286b037461b7311792d4515f90573b8b968a72580d7fbe79d859a4b UID:62a21653-fb04-4160-81ac-a3647bfa3884 NetNS:/var/run/netns/2f7e4144-182d-44b3-9b1e-b30fd7781adb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000812ad8}] Aliases:map[]}"
	Dec 13 13:45:01 old-k8s-version-417583 crio[772]: time="2025-12-13T13:45:01.749999636Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 13 13:45:01 old-k8s-version-417583 crio[772]: time="2025-12-13T13:45:01.750993782Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 13:45:01 old-k8s-version-417583 crio[772]: time="2025-12-13T13:45:01.752148266Z" level=info msg="Ran pod sandbox 33693c0be286b037461b7311792d4515f90573b8b968a72580d7fbe79d859a4b with infra container: default/busybox/POD" id=fbc52a54-8722-4f31-9959-a5f122ea2826 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:45:01 old-k8s-version-417583 crio[772]: time="2025-12-13T13:45:01.753614443Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a016c365-27f8-4cba-b521-80da5fa1949b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:45:01 old-k8s-version-417583 crio[772]: time="2025-12-13T13:45:01.753801408Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a016c365-27f8-4cba-b521-80da5fa1949b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:45:01 old-k8s-version-417583 crio[772]: time="2025-12-13T13:45:01.753866797Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a016c365-27f8-4cba-b521-80da5fa1949b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:45:01 old-k8s-version-417583 crio[772]: time="2025-12-13T13:45:01.754490308Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=296c0bac-5705-45fe-9a05-cfa42f947364 name=/runtime.v1.ImageService/PullImage
	Dec 13 13:45:01 old-k8s-version-417583 crio[772]: time="2025-12-13T13:45:01.757073994Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 13 13:45:02 old-k8s-version-417583 crio[772]: time="2025-12-13T13:45:02.445109285Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=296c0bac-5705-45fe-9a05-cfa42f947364 name=/runtime.v1.ImageService/PullImage
	Dec 13 13:45:02 old-k8s-version-417583 crio[772]: time="2025-12-13T13:45:02.446033175Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f8ee3444-176b-4838-86bd-86b2e80f4b5e name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:45:02 old-k8s-version-417583 crio[772]: time="2025-12-13T13:45:02.447845548Z" level=info msg="Creating container: default/busybox/busybox" id=d260fb60-6e8b-4af8-8e5a-c028d07722f6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:45:02 old-k8s-version-417583 crio[772]: time="2025-12-13T13:45:02.44800231Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:45:02 old-k8s-version-417583 crio[772]: time="2025-12-13T13:45:02.452861237Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:45:02 old-k8s-version-417583 crio[772]: time="2025-12-13T13:45:02.453396724Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:45:02 old-k8s-version-417583 crio[772]: time="2025-12-13T13:45:02.486991879Z" level=info msg="Created container 429aea38fe28cf3eb5ea100b39e433a725ff470fc1c5c754aa7328ed8d472ac7: default/busybox/busybox" id=d260fb60-6e8b-4af8-8e5a-c028d07722f6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:45:02 old-k8s-version-417583 crio[772]: time="2025-12-13T13:45:02.487588512Z" level=info msg="Starting container: 429aea38fe28cf3eb5ea100b39e433a725ff470fc1c5c754aa7328ed8d472ac7" id=bad2fe13-c344-4ecb-ba6f-11aef945a3cf name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:45:02 old-k8s-version-417583 crio[772]: time="2025-12-13T13:45:02.489352176Z" level=info msg="Started container" PID=2214 containerID=429aea38fe28cf3eb5ea100b39e433a725ff470fc1c5c754aa7328ed8d472ac7 description=default/busybox/busybox id=bad2fe13-c344-4ecb-ba6f-11aef945a3cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=33693c0be286b037461b7311792d4515f90573b8b968a72580d7fbe79d859a4b
	Dec 13 13:45:09 old-k8s-version-417583 crio[772]: time="2025-12-13T13:45:09.545849254Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	429aea38fe28c       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   33693c0be286b       busybox                                          default
	6859d9e2d36a5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      12 seconds ago      Running             coredns                   0                   913ad05715c42       coredns-5dd5756b68-88x45                         kube-system
	ac3f9270d0105       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   f02443ca7e25f       storage-provisioner                              kube-system
	a1b174889ef5e       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   8b6d571e3c0b4       kindnet-qnxmc                                    kube-system
	f00ecbc7d380e       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      25 seconds ago      Running             kube-proxy                0                   03f05a0791acb       kube-proxy-r84xd                                 kube-system
	0a16999947a8d       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      43 seconds ago      Running             kube-controller-manager   0                   0a8d0026a5e2c       kube-controller-manager-old-k8s-version-417583   kube-system
	2a3a999e2deca       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      43 seconds ago      Running             kube-scheduler            0                   2766e7f1db651       kube-scheduler-old-k8s-version-417583            kube-system
	57b039ab97f8d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      43 seconds ago      Running             etcd                      0                   8de2259a59b08       etcd-old-k8s-version-417583                      kube-system
	14330e55bc4f4       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      43 seconds ago      Running             kube-apiserver            0                   d95738c2465b6       kube-apiserver-old-k8s-version-417583            kube-system
	
	
	==> coredns [6859d9e2d36a55701374e83ee3086a571c05df76701b25ee14371b8e34e57344] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41838 - 45423 "HINFO IN 2549100815548738314.8702802067651403777. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.908600657s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-417583
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-417583
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=old-k8s-version-417583
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_44_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:44:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-417583
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:45:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:45:03 +0000   Sat, 13 Dec 2025 13:44:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:45:03 +0000   Sat, 13 Dec 2025 13:44:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:45:03 +0000   Sat, 13 Dec 2025 13:44:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 13:45:03 +0000   Sat, 13 Dec 2025 13:44:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-417583
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                40103729-700e-4b92-90bd-81879b0deff9
	  Boot ID:                    3a031c38-2de5-4abf-9191-ca3cf8c37af1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-88x45                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-old-k8s-version-417583                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         39s
	  kube-system                 kindnet-qnxmc                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-old-k8s-version-417583             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-417583    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-r84xd                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-old-k8s-version-417583             100m (1%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 39s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s   kubelet          Node old-k8s-version-417583 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s   kubelet          Node old-k8s-version-417583 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s   kubelet          Node old-k8s-version-417583 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node old-k8s-version-417583 event: Registered Node old-k8s-version-417583 in Controller
	  Normal  NodeReady                13s   kubelet          Node old-k8s-version-417583 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 d4 5a 35 c7 c3 08 06
	[  +0.021086] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +19.681588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 0c 97 18 9b e3 08 06
	[  +0.000314] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 04 61 d2 c8 ed 08 06
	[Dec13 13:44] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[  +7.252347] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 ce fd 58 59 0f 08 06
	[  +0.000117] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	[  +1.567410] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 59 b8 80 29 4a 08 06
	[  +0.000370] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +13.814205] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 cb 6b 87 5d af 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[Dec13 13:45] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8e 49 cc d7 b3 9c 08 06
	[  +0.000851] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	
	
	==> etcd [57b039ab97f8de1204a9d0819ffc4cad19e4b335e323f93e5e095a620c53ffb8] <==
	{"level":"info","ts":"2025-12-13T13:44:27.640297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-13T13:44:27.64044Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-13T13:44:27.641459Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-13T13:44:27.641546Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-13T13:44:27.64159Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-13T13:44:27.641753Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-13T13:44:27.641827Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-13T13:44:27.729803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-13T13:44:27.729852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-13T13:44:27.72989Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-13T13:44:27.729919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-13T13:44:27.72993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-13T13:44:27.729947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-13T13:44:27.729972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-13T13:44:27.730814Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-417583 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-13T13:44:27.730805Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-13T13:44:27.73087Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-13T13:44:27.731137Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-13T13:44:27.731501Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-13T13:44:27.73161Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-13T13:44:27.731646Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-13T13:44:27.732005Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-13T13:44:27.732325Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-13T13:44:27.73247Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-13T13:44:27.734639Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 13:45:11 up  2:27,  0 user,  load average: 5.43, 3.69, 2.37
	Linux old-k8s-version-417583 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a1b174889ef5e643944fde6731ca2f7f986d251c0f402b0da9a18ee5b07f07b9] <==
	I1213 13:44:48.109316       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 13:44:48.109543       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1213 13:44:48.109694       1 main.go:148] setting mtu 1500 for CNI 
	I1213 13:44:48.109709       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 13:44:48.109718       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T13:44:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 13:44:48.311494       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 13:44:48.311592       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 13:44:48.311606       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 13:44:48.311766       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 13:44:48.712422       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 13:44:48.712454       1 metrics.go:72] Registering metrics
	I1213 13:44:48.712661       1 controller.go:711] "Syncing nftables rules"
	I1213 13:44:58.315882       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 13:44:58.315945       1 main.go:301] handling current node
	I1213 13:45:08.314532       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 13:45:08.314575       1 main.go:301] handling current node
	
	
	==> kube-apiserver [14330e55bc4f433e30bfe6823d3b28150bfcd05da410f8a9d4c51c87ab87772b] <==
	I1213 13:44:29.390447       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1213 13:44:29.391731       1 controller.go:624] quota admission added evaluator for: namespaces
	I1213 13:44:29.393479       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1213 13:44:29.393506       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1213 13:44:29.393534       1 aggregator.go:166] initial CRD sync complete...
	I1213 13:44:29.393547       1 autoregister_controller.go:141] Starting autoregister controller
	I1213 13:44:29.393554       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 13:44:29.393560       1 cache.go:39] Caches are synced for autoregister controller
	I1213 13:44:29.580522       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 13:44:30.295502       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1213 13:44:30.298908       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1213 13:44:30.298929       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 13:44:30.696093       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 13:44:30.730767       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 13:44:30.799534       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1213 13:44:30.805451       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1213 13:44:30.806700       1 controller.go:624] quota admission added evaluator for: endpoints
	I1213 13:44:30.810860       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:44:31.344766       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1213 13:44:32.355631       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1213 13:44:32.366338       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1213 13:44:32.376195       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1213 13:44:44.960620       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1213 13:44:45.259621       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1213 13:44:45.259621       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [0a16999947a8dad04af6875a6479784eb08cc4b63198924246515dee56bcee82] <==
	I1213 13:44:44.757257       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1213 13:44:44.758476       1 shared_informer.go:318] Caches are synced for GC
	I1213 13:44:44.758502       1 shared_informer.go:318] Caches are synced for persistent volume
	I1213 13:44:44.759822       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1213 13:44:44.873150       1 shared_informer.go:318] Caches are synced for resource quota
	I1213 13:44:44.949357       1 shared_informer.go:318] Caches are synced for resource quota
	I1213 13:44:44.965391       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1213 13:44:45.276205       1 shared_informer.go:318] Caches are synced for garbage collector
	I1213 13:44:45.283881       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-qnxmc"
	I1213 13:44:45.285740       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-r84xd"
	I1213 13:44:45.305257       1 shared_informer.go:318] Caches are synced for garbage collector
	I1213 13:44:45.305288       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1213 13:44:45.470187       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1213 13:44:45.717133       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-nlrcs"
	I1213 13:44:45.722507       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-88x45"
	I1213 13:44:45.739700       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="773.838012ms"
	I1213 13:44:45.746936       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-nlrcs"
	I1213 13:44:45.752539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.782643ms"
	I1213 13:44:45.758621       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.016338ms"
	I1213 13:44:45.758755       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.923µs"
	I1213 13:44:58.491502       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="137.652µs"
	I1213 13:44:58.508061       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="130.821µs"
	I1213 13:44:59.526734       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.13467ms"
	I1213 13:44:59.526898       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="116.585µs"
	I1213 13:44:59.711939       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [f00ecbc7d380ed7c2a22bdf91bff66ec78b4aa6a1e2b3fe08ef8c5ff08cea6c6] <==
	I1213 13:44:46.005676       1 server_others.go:69] "Using iptables proxy"
	I1213 13:44:46.016823       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1213 13:44:46.039946       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:44:46.042914       1 server_others.go:152] "Using iptables Proxier"
	I1213 13:44:46.042952       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1213 13:44:46.042961       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1213 13:44:46.042987       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1213 13:44:46.043206       1 server.go:846] "Version info" version="v1.28.0"
	I1213 13:44:46.043218       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:44:46.045040       1 config.go:97] "Starting endpoint slice config controller"
	I1213 13:44:46.045076       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1213 13:44:46.045079       1 config.go:188] "Starting service config controller"
	I1213 13:44:46.045112       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1213 13:44:46.045289       1 config.go:315] "Starting node config controller"
	I1213 13:44:46.045312       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1213 13:44:46.145772       1 shared_informer.go:318] Caches are synced for node config
	I1213 13:44:46.145809       1 shared_informer.go:318] Caches are synced for service config
	I1213 13:44:46.145834       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2a3a999e2deca8006754aef58ebe4c8a41221e2c8b50c63875d699ddaf0a6742] <==
	W1213 13:44:29.355319       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1213 13:44:29.355578       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1213 13:44:29.355709       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1213 13:44:29.355740       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1213 13:44:29.355853       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1213 13:44:29.355884       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1213 13:44:29.356102       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1213 13:44:29.356112       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1213 13:44:29.356123       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1213 13:44:29.356131       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1213 13:44:30.190927       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1213 13:44:30.191036       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1213 13:44:30.269660       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1213 13:44:30.269694       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1213 13:44:30.287055       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1213 13:44:30.287087       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1213 13:44:30.300470       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1213 13:44:30.300510       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1213 13:44:30.362722       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1213 13:44:30.362766       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1213 13:44:30.488742       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1213 13:44:30.488813       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 13:44:30.565625       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1213 13:44:30.565658       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1213 13:44:32.249701       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 13 13:44:44 old-k8s-version-417583 kubelet[1400]: I1213 13:44:44.733349    1400 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 13 13:44:45 old-k8s-version-417583 kubelet[1400]: I1213 13:44:45.304716    1400 topology_manager.go:215] "Topology Admit Handler" podUID="312f48e9-7f92-45ef-9351-a4565768c8b0" podNamespace="kube-system" podName="kindnet-qnxmc"
	Dec 13 13:44:45 old-k8s-version-417583 kubelet[1400]: I1213 13:44:45.306570    1400 topology_manager.go:215] "Topology Admit Handler" podUID="c7442d18-20f9-4a27-b079-4e86e2804ab7" podNamespace="kube-system" podName="kube-proxy-r84xd"
	Dec 13 13:44:45 old-k8s-version-417583 kubelet[1400]: I1213 13:44:45.504583    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c7442d18-20f9-4a27-b079-4e86e2804ab7-kube-proxy\") pod \"kube-proxy-r84xd\" (UID: \"c7442d18-20f9-4a27-b079-4e86e2804ab7\") " pod="kube-system/kube-proxy-r84xd"
	Dec 13 13:44:45 old-k8s-version-417583 kubelet[1400]: I1213 13:44:45.504646    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/312f48e9-7f92-45ef-9351-a4565768c8b0-cni-cfg\") pod \"kindnet-qnxmc\" (UID: \"312f48e9-7f92-45ef-9351-a4565768c8b0\") " pod="kube-system/kindnet-qnxmc"
	Dec 13 13:44:45 old-k8s-version-417583 kubelet[1400]: I1213 13:44:45.504686    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/312f48e9-7f92-45ef-9351-a4565768c8b0-xtables-lock\") pod \"kindnet-qnxmc\" (UID: \"312f48e9-7f92-45ef-9351-a4565768c8b0\") " pod="kube-system/kindnet-qnxmc"
	Dec 13 13:44:45 old-k8s-version-417583 kubelet[1400]: I1213 13:44:45.504717    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/312f48e9-7f92-45ef-9351-a4565768c8b0-lib-modules\") pod \"kindnet-qnxmc\" (UID: \"312f48e9-7f92-45ef-9351-a4565768c8b0\") " pod="kube-system/kindnet-qnxmc"
	Dec 13 13:44:45 old-k8s-version-417583 kubelet[1400]: I1213 13:44:45.504749    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scgxt\" (UniqueName: \"kubernetes.io/projected/312f48e9-7f92-45ef-9351-a4565768c8b0-kube-api-access-scgxt\") pod \"kindnet-qnxmc\" (UID: \"312f48e9-7f92-45ef-9351-a4565768c8b0\") " pod="kube-system/kindnet-qnxmc"
	Dec 13 13:44:45 old-k8s-version-417583 kubelet[1400]: I1213 13:44:45.504794    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7442d18-20f9-4a27-b079-4e86e2804ab7-xtables-lock\") pod \"kube-proxy-r84xd\" (UID: \"c7442d18-20f9-4a27-b079-4e86e2804ab7\") " pod="kube-system/kube-proxy-r84xd"
	Dec 13 13:44:45 old-k8s-version-417583 kubelet[1400]: I1213 13:44:45.504834    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7442d18-20f9-4a27-b079-4e86e2804ab7-lib-modules\") pod \"kube-proxy-r84xd\" (UID: \"c7442d18-20f9-4a27-b079-4e86e2804ab7\") " pod="kube-system/kube-proxy-r84xd"
	Dec 13 13:44:45 old-k8s-version-417583 kubelet[1400]: I1213 13:44:45.504865    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdljn\" (UniqueName: \"kubernetes.io/projected/c7442d18-20f9-4a27-b079-4e86e2804ab7-kube-api-access-sdljn\") pod \"kube-proxy-r84xd\" (UID: \"c7442d18-20f9-4a27-b079-4e86e2804ab7\") " pod="kube-system/kube-proxy-r84xd"
	Dec 13 13:44:46 old-k8s-version-417583 kubelet[1400]: I1213 13:44:46.477527    1400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-r84xd" podStartSLOduration=1.477471242 podCreationTimestamp="2025-12-13 13:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:44:46.477241228 +0000 UTC m=+14.145386820" watchObservedRunningTime="2025-12-13 13:44:46.477471242 +0000 UTC m=+14.145616832"
	Dec 13 13:44:52 old-k8s-version-417583 kubelet[1400]: I1213 13:44:52.434758    1400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-qnxmc" podStartSLOduration=5.439499581 podCreationTimestamp="2025-12-13 13:44:45 +0000 UTC" firstStartedPulling="2025-12-13 13:44:45.917726032 +0000 UTC m=+13.585871615" lastFinishedPulling="2025-12-13 13:44:47.912930872 +0000 UTC m=+15.581076454" observedRunningTime="2025-12-13 13:44:48.489958163 +0000 UTC m=+16.158103755" watchObservedRunningTime="2025-12-13 13:44:52.43470442 +0000 UTC m=+20.102850042"
	Dec 13 13:44:58 old-k8s-version-417583 kubelet[1400]: I1213 13:44:58.469290    1400 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 13 13:44:58 old-k8s-version-417583 kubelet[1400]: I1213 13:44:58.490314    1400 topology_manager.go:215] "Topology Admit Handler" podUID="8abf66a9-bb12-4644-b554-8a3ba8ce489a" podNamespace="kube-system" podName="storage-provisioner"
	Dec 13 13:44:58 old-k8s-version-417583 kubelet[1400]: I1213 13:44:58.491807    1400 topology_manager.go:215] "Topology Admit Handler" podUID="eb8ec109-32d0-4f39-a94c-f5e8190aa012" podNamespace="kube-system" podName="coredns-5dd5756b68-88x45"
	Dec 13 13:44:58 old-k8s-version-417583 kubelet[1400]: I1213 13:44:58.596988    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt2zw\" (UniqueName: \"kubernetes.io/projected/eb8ec109-32d0-4f39-a94c-f5e8190aa012-kube-api-access-gt2zw\") pod \"coredns-5dd5756b68-88x45\" (UID: \"eb8ec109-32d0-4f39-a94c-f5e8190aa012\") " pod="kube-system/coredns-5dd5756b68-88x45"
	Dec 13 13:44:58 old-k8s-version-417583 kubelet[1400]: I1213 13:44:58.597040    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8abf66a9-bb12-4644-b554-8a3ba8ce489a-tmp\") pod \"storage-provisioner\" (UID: \"8abf66a9-bb12-4644-b554-8a3ba8ce489a\") " pod="kube-system/storage-provisioner"
	Dec 13 13:44:58 old-k8s-version-417583 kubelet[1400]: I1213 13:44:58.597070    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb8ec109-32d0-4f39-a94c-f5e8190aa012-config-volume\") pod \"coredns-5dd5756b68-88x45\" (UID: \"eb8ec109-32d0-4f39-a94c-f5e8190aa012\") " pod="kube-system/coredns-5dd5756b68-88x45"
	Dec 13 13:44:58 old-k8s-version-417583 kubelet[1400]: I1213 13:44:58.597228    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtr8z\" (UniqueName: \"kubernetes.io/projected/8abf66a9-bb12-4644-b554-8a3ba8ce489a-kube-api-access-rtr8z\") pod \"storage-provisioner\" (UID: \"8abf66a9-bb12-4644-b554-8a3ba8ce489a\") " pod="kube-system/storage-provisioner"
	Dec 13 13:44:59 old-k8s-version-417583 kubelet[1400]: I1213 13:44:59.509563    1400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.509497737 podCreationTimestamp="2025-12-13 13:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:44:59.509248837 +0000 UTC m=+27.177394428" watchObservedRunningTime="2025-12-13 13:44:59.509497737 +0000 UTC m=+27.177643367"
	Dec 13 13:44:59 old-k8s-version-417583 kubelet[1400]: I1213 13:44:59.519303    1400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-88x45" podStartSLOduration=14.519252569 podCreationTimestamp="2025-12-13 13:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:44:59.518935453 +0000 UTC m=+27.187081078" watchObservedRunningTime="2025-12-13 13:44:59.519252569 +0000 UTC m=+27.187398160"
	Dec 13 13:45:01 old-k8s-version-417583 kubelet[1400]: I1213 13:45:01.433304    1400 topology_manager.go:215] "Topology Admit Handler" podUID="62a21653-fb04-4160-81ac-a3647bfa3884" podNamespace="default" podName="busybox"
	Dec 13 13:45:01 old-k8s-version-417583 kubelet[1400]: I1213 13:45:01.617413    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k25lb\" (UniqueName: \"kubernetes.io/projected/62a21653-fb04-4160-81ac-a3647bfa3884-kube-api-access-k25lb\") pod \"busybox\" (UID: \"62a21653-fb04-4160-81ac-a3647bfa3884\") " pod="default/busybox"
	Dec 13 13:45:02 old-k8s-version-417583 kubelet[1400]: I1213 13:45:02.518188    1400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.826789627 podCreationTimestamp="2025-12-13 13:45:01 +0000 UTC" firstStartedPulling="2025-12-13 13:45:01.754098856 +0000 UTC m=+29.422244439" lastFinishedPulling="2025-12-13 13:45:02.445439946 +0000 UTC m=+30.113585529" observedRunningTime="2025-12-13 13:45:02.517763448 +0000 UTC m=+30.185909040" watchObservedRunningTime="2025-12-13 13:45:02.518130717 +0000 UTC m=+30.186276309"
	
	
	==> storage-provisioner [ac3f9270d01057bf4ca9e051c5e216951c639d8d679b2c4dc223c715385d57b1] <==
	I1213 13:44:58.849857       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 13:44:58.859382       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 13:44:58.859447       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1213 13:44:58.866077       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 13:44:58.866204       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-417583_33a3e120-c44f-4c7e-aad5-970bc48eea41!
	I1213 13:44:58.866455       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0650e3a6-aaf8-4fe6-b96a-06ebf14116a7", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-417583_33a3e120-c44f-4c7e-aad5-970bc48eea41 became leader
	I1213 13:44:58.967232       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-417583_33a3e120-c44f-4c7e-aad5-970bc48eea41!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-417583 -n old-k8s-version-417583
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-417583 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.51s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-992258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-992258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (288.602858ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:45:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
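The MK_ADDON_ENABLE_PAUSED exit above originates in minikube's paused-state check: per the stderr, it runs `sudo runc list -f json` inside the node and fails because /run/runc does not exist. The two commands below are an illustrative sketch of re-running that check by hand, not part of the test harness; they assume the docker driver (so the node is the container no-preload-992258) and that /run/runc is runc's default state directory:

	# Re-run the listing minikube attempts, per the error message above.
	docker exec no-preload-992258 sudo runc list -f json
	# Check whether runc's state directory exists at all inside the node.
	docker exec no-preload-992258 ls -ld /run/runc

One unconfirmed explanation is that this cri-o build delegates to a different OCI runtime (for example crun), so runc never creates its state directory; the logs here only show that the directory is missing.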
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-992258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-992258 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-992258 describe deploy/metrics-server -n kube-system: exit status 1 (75.998563ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-992258 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
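Because the enable command failed its paused-state check before applying the addon, there is no metrics-server deployment to describe, and the image assertion has nothing to inspect. As a sketch only (the jsonpath expression is mine, not the harness's), the field the assertion targets could be read with the same context the test uses:

	kubectl --context no-preload-992258 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'

With the --images/--registries flags passed above, that value would be expected to contain fake.domain/registry.k8s.io/echoserver:1.4; here it cannot, since the deployment was never created.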
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-992258
helpers_test.go:244: (dbg) docker inspect no-preload-992258:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1ee238da5195f26130843a1fef5cc5d89d2b40177ad305da75ce0a8298d9c5a7",
	        "Created": "2025-12-13T13:44:34.580077423Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 699514,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:44:34.618078857Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/1ee238da5195f26130843a1fef5cc5d89d2b40177ad305da75ce0a8298d9c5a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1ee238da5195f26130843a1fef5cc5d89d2b40177ad305da75ce0a8298d9c5a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/1ee238da5195f26130843a1fef5cc5d89d2b40177ad305da75ce0a8298d9c5a7/hosts",
	        "LogPath": "/var/lib/docker/containers/1ee238da5195f26130843a1fef5cc5d89d2b40177ad305da75ce0a8298d9c5a7/1ee238da5195f26130843a1fef5cc5d89d2b40177ad305da75ce0a8298d9c5a7-json.log",
	        "Name": "/no-preload-992258",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-992258:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-992258",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1ee238da5195f26130843a1fef5cc5d89d2b40177ad305da75ce0a8298d9c5a7",
	                "LowerDir": "/var/lib/docker/overlay2/e62da2e21090d931262b0bfdee947efa3f7e7addf083b74e9377f9573a972c68-init/diff:/var/lib/docker/overlay2/2ab30f867418f233812f5ff754587aaeab7569a5579dc6a5c99873a35cf81eb6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e62da2e21090d931262b0bfdee947efa3f7e7addf083b74e9377f9573a972c68/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e62da2e21090d931262b0bfdee947efa3f7e7addf083b74e9377f9573a972c68/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e62da2e21090d931262b0bfdee947efa3f7e7addf083b74e9377f9573a972c68/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-992258",
	                "Source": "/var/lib/docker/volumes/no-preload-992258/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-992258",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-992258",
	                "name.minikube.sigs.k8s.io": "no-preload-992258",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "54ad20e92539879969239c70881f97aee62932d52aecedc6d9563d7d2e473292",
	            "SandboxKey": "/var/run/docker/netns/54ad20e92539",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33480"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33478"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33479"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-992258": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6b03146af25791542829a33be34e6cdd463680d204ddd7fe7766c21dca4ab829",
	                    "EndpointID": "77e9f46ae52cc723a953a9be114383443ef5d82d88ccfa94ad1b5fbf8be620dd",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "4a:99:af:d2:5c:82",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-992258",
	                        "1ee238da5195"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-992258 -n no-preload-992258
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-992258 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-992258 logs -n 25: (1.117709193s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-884214 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                        │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                         │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ ssh     │ -p bridge-884214 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ ssh     │ -p bridge-884214 sudo docker system info                                                                                                                                                                                                      │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ ssh     │ -p bridge-884214 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ ssh     │ -p bridge-884214 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ ssh     │ -p bridge-884214 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ ssh     │ -p bridge-884214 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo containerd config dump                                                                                                                                                                                                  │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo crio config                                                                                                                                                                                                             │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ delete  │ -p bridge-884214                                                                                                                                                                                                                              │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ delete  │ -p disable-driver-mounts-031848                                                                                                                                                                                                               │ disable-driver-mounts-031848 │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p default-k8s-diff-port-038239 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-992258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-417583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p old-k8s-version-417583 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:45:28
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:45:28.684396  717532 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:45:28.684702  717532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:45:28.684717  717532 out.go:374] Setting ErrFile to fd 2...
	I1213 13:45:28.684724  717532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:45:28.685101  717532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:45:28.685735  717532 out.go:368] Setting JSON to false
	I1213 13:45:28.687295  717532 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8877,"bootTime":1765624652,"procs":363,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:45:28.687363  717532 start.go:143] virtualization: kvm guest
	I1213 13:45:28.690114  717532 out.go:179] * [old-k8s-version-417583] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:45:28.691309  717532 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:45:28.691367  717532 notify.go:221] Checking for updates...
	I1213 13:45:28.696656  717532 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:45:28.697886  717532 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:45:28.698983  717532 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:45:28.700928  717532 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:45:28.703267  717532 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:45:28.705174  717532 config.go:182] Loaded profile config "old-k8s-version-417583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1213 13:45:28.707456  717532 out.go:179] * Kubernetes 1.34.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.2
	I1213 13:45:28.709011  717532 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:45:28.750068  717532 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:45:28.750193  717532 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:45:28.835604  717532 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-13 13:45:28.822118443 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:45:28.835796  717532 docker.go:319] overlay module found
	I1213 13:45:28.837617  717532 out.go:179] * Using the docker driver based on existing profile
	I1213 13:45:28.838687  717532 start.go:309] selected driver: docker
	I1213 13:45:28.838708  717532 start.go:927] validating driver "docker" against &{Name:old-k8s-version-417583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-417583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:45:28.838817  717532 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:45:28.839662  717532 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:45:28.911843  717532 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-13 13:45:28.901811515 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:45:28.912108  717532 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:45:28.912133  717532 cni.go:84] Creating CNI manager for ""
	I1213 13:45:28.912188  717532 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:45:28.912217  717532 start.go:353] cluster config:
	{Name:old-k8s-version-417583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-417583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:45:28.913795  717532 out.go:179] * Starting "old-k8s-version-417583" primary control-plane node in "old-k8s-version-417583" cluster
	I1213 13:45:28.915443  717532 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 13:45:28.917403  717532 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	
	
	==> CRI-O <==
	Dec 13 13:45:17 no-preload-992258 crio[772]: time="2025-12-13T13:45:17.354338727Z" level=info msg="Starting container: 52219c19c82425d3ef79b4068a9d85288b8ca0fd09f3f564f2ceb6524ca11d6f" id=487de18e-87a9-4715-9be4-fdfd5b50e2c3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:45:17 no-preload-992258 crio[772]: time="2025-12-13T13:45:17.356332778Z" level=info msg="Started container" PID=2839 containerID=52219c19c82425d3ef79b4068a9d85288b8ca0fd09f3f564f2ceb6524ca11d6f description=kube-system/coredns-7d764666f9-qfkgp/coredns id=487de18e-87a9-4715-9be4-fdfd5b50e2c3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f00ff7fc4b111e91160638e3abb48c0ccd4ef2461b1644d5d53976e200195449
	Dec 13 13:45:19 no-preload-992258 crio[772]: time="2025-12-13T13:45:19.937636636Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c558f2b8-a71f-44db-9d25-09555619ce3e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:45:19 no-preload-992258 crio[772]: time="2025-12-13T13:45:19.937719309Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:45:19 no-preload-992258 crio[772]: time="2025-12-13T13:45:19.943309129Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:24834614c410d9ce1d5173a81f7f7944ac98135c7bcca22a02a609318e767e4d UID:80cbe112-02fa-49c6-8738-accc0daffc9c NetNS:/var/run/netns/666a78ae-6033-4e6f-8550-a8a30fc88743 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000a98368}] Aliases:map[]}"
	Dec 13 13:45:19 no-preload-992258 crio[772]: time="2025-12-13T13:45:19.94336115Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 13 13:45:19 no-preload-992258 crio[772]: time="2025-12-13T13:45:19.954594024Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:24834614c410d9ce1d5173a81f7f7944ac98135c7bcca22a02a609318e767e4d UID:80cbe112-02fa-49c6-8738-accc0daffc9c NetNS:/var/run/netns/666a78ae-6033-4e6f-8550-a8a30fc88743 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000a98368}] Aliases:map[]}"
	Dec 13 13:45:19 no-preload-992258 crio[772]: time="2025-12-13T13:45:19.954789785Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 13 13:45:19 no-preload-992258 crio[772]: time="2025-12-13T13:45:19.955651148Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 13:45:19 no-preload-992258 crio[772]: time="2025-12-13T13:45:19.956605603Z" level=info msg="Ran pod sandbox 24834614c410d9ce1d5173a81f7f7944ac98135c7bcca22a02a609318e767e4d with infra container: default/busybox/POD" id=c558f2b8-a71f-44db-9d25-09555619ce3e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:45:19 no-preload-992258 crio[772]: time="2025-12-13T13:45:19.957872936Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f36fa4c1-a8db-4a7c-a3f1-1e5781692f19 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:45:19 no-preload-992258 crio[772]: time="2025-12-13T13:45:19.958008491Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f36fa4c1-a8db-4a7c-a3f1-1e5781692f19 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:45:19 no-preload-992258 crio[772]: time="2025-12-13T13:45:19.958061643Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=f36fa4c1-a8db-4a7c-a3f1-1e5781692f19 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:45:19 no-preload-992258 crio[772]: time="2025-12-13T13:45:19.958883574Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4dedda26-07ab-4427-afa0-ce5c5441f8db name=/runtime.v1.ImageService/PullImage
	Dec 13 13:45:19 no-preload-992258 crio[772]: time="2025-12-13T13:45:19.96032148Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 13 13:45:20 no-preload-992258 crio[772]: time="2025-12-13T13:45:20.634040869Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=4dedda26-07ab-4427-afa0-ce5c5441f8db name=/runtime.v1.ImageService/PullImage
	Dec 13 13:45:20 no-preload-992258 crio[772]: time="2025-12-13T13:45:20.634706508Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7c5dcbbc-c52a-4ef5-9358-7c1d590fa69b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:45:20 no-preload-992258 crio[772]: time="2025-12-13T13:45:20.636476162Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=db598d31-e05d-457a-88d1-f883b048fa26 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:45:20 no-preload-992258 crio[772]: time="2025-12-13T13:45:20.640233547Z" level=info msg="Creating container: default/busybox/busybox" id=f766617f-021a-4d00-9ed7-8cb5bcbd4aeb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:45:20 no-preload-992258 crio[772]: time="2025-12-13T13:45:20.64036529Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:45:20 no-preload-992258 crio[772]: time="2025-12-13T13:45:20.643693312Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:45:20 no-preload-992258 crio[772]: time="2025-12-13T13:45:20.644131366Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:45:20 no-preload-992258 crio[772]: time="2025-12-13T13:45:20.674586167Z" level=info msg="Created container 3b660f4dad2d55afe08c380210fb1c1ba888e4160fe0a615694374751d066830: default/busybox/busybox" id=f766617f-021a-4d00-9ed7-8cb5bcbd4aeb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:45:20 no-preload-992258 crio[772]: time="2025-12-13T13:45:20.675647587Z" level=info msg="Starting container: 3b660f4dad2d55afe08c380210fb1c1ba888e4160fe0a615694374751d066830" id=38acfe46-bab3-45f0-b46b-96b000228868 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:45:20 no-preload-992258 crio[772]: time="2025-12-13T13:45:20.67749356Z" level=info msg="Started container" PID=2919 containerID=3b660f4dad2d55afe08c380210fb1c1ba888e4160fe0a615694374751d066830 description=default/busybox/busybox id=38acfe46-bab3-45f0-b46b-96b000228868 name=/runtime.v1.RuntimeService/StartContainer sandboxID=24834614c410d9ce1d5173a81f7f7944ac98135c7bcca22a02a609318e767e4d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3b660f4dad2d5       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   24834614c410d       busybox                                     default
	52219c19c8242       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      12 seconds ago      Running             coredns                   0                   f00ff7fc4b111       coredns-7d764666f9-qfkgp                    kube-system
	4137005091d28       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   7984a1b38c545       storage-provisioner                         kube-system
	fec3a2e9fbc5f       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   94e48394ac29c       kindnet-2n8ks                               kube-system
	54c03a8a2fc96       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                      25 seconds ago      Running             kube-proxy                0                   c3b6c76265ed9       kube-proxy-sjrzk                            kube-system
	b3be5a6766439       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                      35 seconds ago      Running             kube-scheduler            0                   8f07abdf5a479       kube-scheduler-no-preload-992258            kube-system
	6a3b9c23498f0       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                      35 seconds ago      Running             kube-controller-manager   0                   c6b4df509217e       kube-controller-manager-no-preload-992258   kube-system
	c180d3fa33dba       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                      35 seconds ago      Running             kube-apiserver            0                   4ece417e6ed48       kube-apiserver-no-preload-992258            kube-system
	2b651333db51e       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      35 seconds ago      Running             etcd                      0                   8b09ae418c80a       etcd-no-preload-992258                      kube-system
	
	
	==> coredns [52219c19c82425d3ef79b4068a9d85288b8ca0fd09f3f564f2ceb6524ca11d6f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:51737 - 45713 "HINFO IN 5700945163938640567.7932485500237603517. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.089605714s
	
	
	==> describe nodes <==
	Name:               no-preload-992258
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-992258
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=no-preload-992258
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_44_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:44:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-992258
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:45:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:45:29 +0000   Sat, 13 Dec 2025 13:44:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:45:29 +0000   Sat, 13 Dec 2025 13:44:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:45:29 +0000   Sat, 13 Dec 2025 13:44:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 13:45:29 +0000   Sat, 13 Dec 2025 13:45:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-992258
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                a54834e7-7b06-490e-bc63-9fe908fc9136
	  Boot ID:                    3a031c38-2de5-4abf-9191-ca3cf8c37af1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-qfkgp                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-992258                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-2n8ks                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-no-preload-992258             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-992258    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-sjrzk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-no-preload-992258             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  26s   node-controller  Node no-preload-992258 event: Registered Node no-preload-992258 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 d4 5a 35 c7 c3 08 06
	[  +0.021086] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +19.681588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 0c 97 18 9b e3 08 06
	[  +0.000314] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 04 61 d2 c8 ed 08 06
	[Dec13 13:44] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[  +7.252347] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 ce fd 58 59 0f 08 06
	[  +0.000117] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	[  +1.567410] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 59 b8 80 29 4a 08 06
	[  +0.000370] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +13.814205] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 cb 6b 87 5d af 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[Dec13 13:45] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8e 49 cc d7 b3 9c 08 06
	[  +0.000851] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	
	
	==> etcd [2b651333db51eb8d5277fd0d0aac078481edbde7b69b6965f70857f00d7c3872] <==
	{"level":"warn","ts":"2025-12-13T13:44:56.256459Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.699356ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-13T13:44:56.256524Z","caller":"traceutil/trace.go:172","msg":"trace[1057433852] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:40; }","duration":"111.778098ms","start":"2025-12-13T13:44:56.144731Z","end":"2025-12-13T13:44:56.256509Z","steps":["trace[1057433852] 'range keys from in-memory index tree'  (duration: 110.744042ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T13:44:56.256551Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.802688ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597697760466643 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1.storage.k8s.io\" mod_revision:0 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1.storage.k8s.io\" value_size:880 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-12-13T13:44:56.256703Z","caller":"traceutil/trace.go:172","msg":"trace[1033842470] transaction","detail":"{read_only:false; response_revision:42; number_of_response:1; }","duration":"175.217824ms","start":"2025-12-13T13:44:56.081472Z","end":"2025-12-13T13:44:56.256690Z","steps":["trace[1033842470] 'process raft request'  (duration: 175.159993ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:44:56.256728Z","caller":"traceutil/trace.go:172","msg":"trace[626246857] transaction","detail":"{read_only:false; response_revision:41; number_of_response:1; }","duration":"175.830545ms","start":"2025-12-13T13:44:56.080881Z","end":"2025-12-13T13:44:56.256712Z","steps":["trace[626246857] 'process raft request'  (duration: 64.813536ms)","trace[626246857] 'compare'  (duration: 110.684412ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T13:44:56.387697Z","caller":"traceutil/trace.go:172","msg":"trace[832484904] linearizableReadLoop","detail":"{readStateIndex:46; appliedIndex:46; }","duration":"106.352223ms","start":"2025-12-13T13:44:56.281319Z","end":"2025-12-13T13:44:56.387671Z","steps":["trace[832484904] 'read index received'  (duration: 106.345264ms)","trace[832484904] 'applied index is now lower than readState.Index'  (duration: 5.691µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T13:44:56.500844Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"219.498062ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"warn","ts":"2025-12-13T13:44:56.500923Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.166139ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597697760466646 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/flowschemas/system-nodes\" mod_revision:0 > success:<request_put:<key:\"/registry/flowschemas/system-nodes\" value_size:595 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-12-13T13:44:56.500927Z","caller":"traceutil/trace.go:172","msg":"trace[1334947843] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:0; response_revision:42; }","duration":"219.593165ms","start":"2025-12-13T13:44:56.281311Z","end":"2025-12-13T13:44:56.500904Z","steps":["trace[1334947843] 'agreement among raft nodes before linearized reading'  (duration: 106.431771ms)","trace[1334947843] 'range keys from in-memory index tree'  (duration: 113.006715ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T13:44:56.500968Z","caller":"traceutil/trace.go:172","msg":"trace[467815112] linearizableReadLoop","detail":"{readStateIndex:47; appliedIndex:46; }","duration":"113.211318ms","start":"2025-12-13T13:44:56.387749Z","end":"2025-12-13T13:44:56.500961Z","steps":["trace[467815112] 'read index received'  (duration: 50.502µs)","trace[467815112] 'applied index is now lower than readState.Index'  (duration: 113.160066ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T13:44:56.501013Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"218.279909ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-13T13:44:56.501031Z","caller":"traceutil/trace.go:172","msg":"trace[1043782768] range","detail":"{range_begin:/registry/clusterroles; range_end:; response_count:0; response_revision:43; }","duration":"218.299438ms","start":"2025-12-13T13:44:56.282727Z","end":"2025-12-13T13:44:56.501026Z","steps":["trace[1043782768] 'agreement among raft nodes before linearized reading'  (duration: 218.260733ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:44:56.501025Z","caller":"traceutil/trace.go:172","msg":"trace[1201647327] transaction","detail":"{read_only:false; response_revision:43; number_of_response:1; }","duration":"241.651182ms","start":"2025-12-13T13:44:56.259352Z","end":"2025-12-13T13:44:56.501003Z","steps":["trace[1201647327] 'process raft request'  (duration: 128.36495ms)","trace[1201647327] 'compare'  (duration: 113.047776ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T13:44:56.501060Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.047913ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-13T13:44:56.501094Z","caller":"traceutil/trace.go:172","msg":"trace[1185789915] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:43; }","duration":"118.089807ms","start":"2025-12-13T13:44:56.382997Z","end":"2025-12-13T13:44:56.501087Z","steps":["trace[1185789915] 'agreement among raft nodes before linearized reading'  (duration: 118.013018ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:44:56.613246Z","caller":"traceutil/trace.go:172","msg":"trace[2116269175] linearizableReadLoop","detail":"{readStateIndex:47; appliedIndex:47; }","duration":"107.997913ms","start":"2025-12-13T13:44:56.505227Z","end":"2025-12-13T13:44:56.613225Z","steps":["trace[2116269175] 'read index received'  (duration: 107.992074ms)","trace[2116269175] 'applied index is now lower than readState.Index'  (duration: 4.607µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T13:44:56.641964Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.704857ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-admin\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-13T13:44:56.642075Z","caller":"traceutil/trace.go:172","msg":"trace[2048489368] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-admin; range_end:; response_count:0; response_revision:43; }","duration":"136.835659ms","start":"2025-12-13T13:44:56.505223Z","end":"2025-12-13T13:44:56.642059Z","steps":["trace[2048489368] 'agreement among raft nodes before linearized reading'  (duration: 108.066746ms)","trace[2048489368] 'range keys from in-memory index tree'  (duration: 28.606652ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T13:44:56.641963Z","caller":"traceutil/trace.go:172","msg":"trace[1662271490] transaction","detail":"{read_only:false; response_revision:44; number_of_response:1; }","duration":"138.477999ms","start":"2025-12-13T13:44:56.503466Z","end":"2025-12-13T13:44:56.641944Z","steps":["trace[1662271490] 'process raft request'  (duration: 109.866245ms)","trace[1662271490] 'compare'  (duration: 28.506761ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T13:44:56.735922Z","caller":"traceutil/trace.go:172","msg":"trace[182664799] transaction","detail":"{read_only:false; response_revision:45; number_of_response:1; }","duration":"232.291029ms","start":"2025-12-13T13:44:56.503610Z","end":"2025-12-13T13:44:56.735901Z","steps":["trace[182664799] 'process raft request'  (duration: 232.113066ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:44:56.735986Z","caller":"traceutil/trace.go:172","msg":"trace[2100046638] transaction","detail":"{read_only:false; response_revision:46; number_of_response:1; }","duration":"230.686444ms","start":"2025-12-13T13:44:56.505274Z","end":"2025-12-13T13:44:56.735960Z","steps":["trace[2100046638] 'process raft request'  (duration: 230.542118ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T13:44:57.080277Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.40352ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597697760466695 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/admin\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/admin\" value_size:571 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-12-13T13:44:57.080495Z","caller":"traceutil/trace.go:172","msg":"trace[444828026] transaction","detail":"{read_only:false; response_revision:74; number_of_response:1; }","duration":"122.801569ms","start":"2025-12-13T13:44:56.957680Z","end":"2025-12-13T13:44:57.080481Z","steps":["trace[444828026] 'process raft request'  (duration: 122.724448ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:44:57.080509Z","caller":"traceutil/trace.go:172","msg":"trace[1086917890] transaction","detail":"{read_only:false; response_revision:73; number_of_response:1; }","duration":"122.842401ms","start":"2025-12-13T13:44:56.957646Z","end":"2025-12-13T13:44:57.080489Z","steps":["trace[1086917890] 'process raft request'  (duration: 17.169471ms)","trace[1086917890] 'compare'  (duration: 105.266918ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T13:45:27.573691Z","caller":"traceutil/trace.go:172","msg":"trace[140575756] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"139.653026ms","start":"2025-12-13T13:45:27.434015Z","end":"2025-12-13T13:45:27.573668Z","steps":["trace[140575756] 'process raft request'  (duration: 139.536659ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:45:29 up  2:27,  0 user,  load average: 6.38, 4.03, 2.51
	Linux no-preload-992258 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fec3a2e9fbc5fc8a45e5e58a3498a2e15bd26ea580d5079cdc7d071f1136b30c] <==
	I1213 13:45:06.479061       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 13:45:06.479350       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 13:45:06.479493       1 main.go:148] setting mtu 1500 for CNI 
	I1213 13:45:06.479507       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 13:45:06.479527       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T13:45:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 13:45:06.776289       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 13:45:06.776326       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 13:45:06.776338       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 13:45:06.776645       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 13:45:06.976728       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 13:45:06.976759       1 metrics.go:72] Registering metrics
	I1213 13:45:06.977035       1 controller.go:711] "Syncing nftables rules"
	I1213 13:45:16.682606       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 13:45:16.682688       1 main.go:301] handling current node
	I1213 13:45:26.682948       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 13:45:26.682988       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c180d3fa33dba7d075756ab40a0aa9ac21ce36b070ff3992c1507cf91e705209] <==
	I1213 13:44:55.479877       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 13:44:55.554410       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1213 13:44:55.554430       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1213 13:44:55.554545       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	E1213 13:44:55.716167       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1213 13:44:55.716928       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:44:55.721086       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 13:44:56.643081       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1213 13:44:56.740917       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1213 13:44:56.740937       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1213 13:44:57.617346       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 13:44:57.657074       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 13:44:57.790468       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1213 13:44:57.801694       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1213 13:44:57.803280       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 13:44:57.811837       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:44:58.304287       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 13:44:58.593437       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 13:44:58.603813       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1213 13:44:58.611076       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 13:45:03.856445       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 13:45:04.109163       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1213 13:45:04.209969       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:45:04.215575       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1213 13:45:28.007762       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:47716: use of closed network connection
	
	
	==> kube-controller-manager [6a3b9c23498f051930947557b109a84bcbf1e30456838041e6f2081132eb718e] <==
	I1213 13:45:03.116102       1 shared_informer.go:377] "Caches are synced"
	I1213 13:45:03.116152       1 shared_informer.go:377] "Caches are synced"
	I1213 13:45:03.116218       1 shared_informer.go:377] "Caches are synced"
	I1213 13:45:03.115735       1 shared_informer.go:377] "Caches are synced"
	I1213 13:45:03.116226       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1213 13:45:03.116491       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-992258"
	I1213 13:45:03.116539       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1213 13:45:03.116278       1 shared_informer.go:377] "Caches are synced"
	I1213 13:45:03.116232       1 shared_informer.go:377] "Caches are synced"
	I1213 13:45:03.116215       1 shared_informer.go:377] "Caches are synced"
	I1213 13:45:03.116896       1 shared_informer.go:377] "Caches are synced"
	I1213 13:45:03.116953       1 shared_informer.go:377] "Caches are synced"
	I1213 13:45:03.116967       1 shared_informer.go:377] "Caches are synced"
	I1213 13:45:03.116270       1 shared_informer.go:377] "Caches are synced"
	I1213 13:45:03.117031       1 range_allocator.go:177] "Sending events to api server"
	I1213 13:45:03.117076       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1213 13:45:03.117081       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 13:45:03.117093       1 shared_informer.go:377] "Caches are synced"
	I1213 13:45:03.125028       1 shared_informer.go:377] "Caches are synced"
	I1213 13:45:03.128960       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-992258" podCIDRs=["10.244.0.0/24"]
	I1213 13:45:03.214734       1 shared_informer.go:377] "Caches are synced"
	I1213 13:45:03.214752       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 13:45:03.214759       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1213 13:45:03.216866       1 shared_informer.go:377] "Caches are synced"
	I1213 13:45:18.118674       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [54c03a8a2fc96d84584cf91cf9b3e1fa7ffffd9ef09fcb9948d9b030e900f85c] <==
	I1213 13:45:04.557045       1 server_linux.go:53] "Using iptables proxy"
	I1213 13:45:04.616811       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 13:45:04.717860       1 shared_informer.go:377] "Caches are synced"
	I1213 13:45:04.717914       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1213 13:45:04.718019       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:45:04.741221       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:45:04.741287       1 server_linux.go:136] "Using iptables Proxier"
	I1213 13:45:04.746812       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:45:04.747258       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 13:45:04.747274       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:45:04.751749       1 config.go:200] "Starting service config controller"
	I1213 13:45:04.751771       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:45:04.751769       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:45:04.751831       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:45:04.751829       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:45:04.751847       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:45:04.751857       1 config.go:309] "Starting node config controller"
	I1213 13:45:04.751862       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:45:04.751868       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:45:04.852550       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 13:45:04.852574       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 13:45:04.852599       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b3be5a67664392ded94cd4c06317cef0501840c0e01fae084b752a33e7762706] <==
	E1213 13:44:56.409226       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1213 13:44:56.410139       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1213 13:44:56.418164       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1213 13:44:56.418971       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1213 13:44:56.421946       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1213 13:44:56.422710       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1213 13:44:56.430545       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1213 13:44:56.431289       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1213 13:44:56.434143       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1213 13:44:56.435003       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1213 13:44:56.480314       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1213 13:44:56.481286       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1213 13:44:56.542930       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1213 13:44:56.543834       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1213 13:44:56.606022       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1213 13:44:56.606934       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1213 13:44:56.638523       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1213 13:44:56.639577       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1213 13:44:56.658051       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1213 13:44:56.659277       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1213 13:44:56.693480       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1213 13:44:56.694442       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1213 13:44:56.739138       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1213 13:44:56.740572       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	I1213 13:44:58.832836       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 13 13:45:04 no-preload-992258 kubelet[2236]: I1213 13:45:04.253635    2236 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/994b46c4-47b5-4866-b1d3-8e6f0fde5c93-kube-proxy\") pod \"kube-proxy-sjrzk\" (UID: \"994b46c4-47b5-4866-b1d3-8e6f0fde5c93\") " pod="kube-system/kube-proxy-sjrzk"
	Dec 13 13:45:04 no-preload-992258 kubelet[2236]: I1213 13:45:04.253662    2236 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/994b46c4-47b5-4866-b1d3-8e6f0fde5c93-xtables-lock\") pod \"kube-proxy-sjrzk\" (UID: \"994b46c4-47b5-4866-b1d3-8e6f0fde5c93\") " pod="kube-system/kube-proxy-sjrzk"
	Dec 13 13:45:04 no-preload-992258 kubelet[2236]: I1213 13:45:04.253686    2236 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8npr\" (UniqueName: \"kubernetes.io/projected/994b46c4-47b5-4866-b1d3-8e6f0fde5c93-kube-api-access-h8npr\") pod \"kube-proxy-sjrzk\" (UID: \"994b46c4-47b5-4866-b1d3-8e6f0fde5c93\") " pod="kube-system/kube-proxy-sjrzk"
	Dec 13 13:45:04 no-preload-992258 kubelet[2236]: E1213 13:45:04.638956    2236 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-992258" containerName="kube-controller-manager"
	Dec 13 13:45:04 no-preload-992258 kubelet[2236]: E1213 13:45:04.727322    2236 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-992258" containerName="kube-apiserver"
	Dec 13 13:45:05 no-preload-992258 kubelet[2236]: E1213 13:45:05.428887    2236 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-992258" containerName="etcd"
	Dec 13 13:45:05 no-preload-992258 kubelet[2236]: I1213 13:45:05.474037    2236 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-sjrzk" podStartSLOduration=1.474020326 podStartE2EDuration="1.474020326s" podCreationTimestamp="2025-12-13 13:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:45:05.473054529 +0000 UTC m=+7.140516368" watchObservedRunningTime="2025-12-13 13:45:05.474020326 +0000 UTC m=+7.141482156"
	Dec 13 13:45:06 no-preload-992258 kubelet[2236]: E1213 13:45:06.035121    2236 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-992258" containerName="kube-scheduler"
	Dec 13 13:45:14 no-preload-992258 kubelet[2236]: E1213 13:45:14.647305    2236 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-992258" containerName="kube-controller-manager"
	Dec 13 13:45:14 no-preload-992258 kubelet[2236]: I1213 13:45:14.668588    2236 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-2n8ks" podStartSLOduration=8.870856916 podStartE2EDuration="10.668557343s" podCreationTimestamp="2025-12-13 13:45:04 +0000 UTC" firstStartedPulling="2025-12-13 13:45:04.450058248 +0000 UTC m=+6.117520080" lastFinishedPulling="2025-12-13 13:45:06.247758687 +0000 UTC m=+7.915220507" observedRunningTime="2025-12-13 13:45:06.471987695 +0000 UTC m=+8.139449537" watchObservedRunningTime="2025-12-13 13:45:14.668557343 +0000 UTC m=+16.336019181"
	Dec 13 13:45:14 no-preload-992258 kubelet[2236]: E1213 13:45:14.734543    2236 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-992258" containerName="kube-apiserver"
	Dec 13 13:45:15 no-preload-992258 kubelet[2236]: E1213 13:45:15.429876    2236 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-992258" containerName="etcd"
	Dec 13 13:45:16 no-preload-992258 kubelet[2236]: E1213 13:45:16.040804    2236 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-992258" containerName="kube-scheduler"
	Dec 13 13:45:16 no-preload-992258 kubelet[2236]: I1213 13:45:16.972895    2236 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 13 13:45:17 no-preload-992258 kubelet[2236]: I1213 13:45:17.044532    2236 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e45b3622-b224-4b7b-9c34-944ef33db069-config-volume\") pod \"coredns-7d764666f9-qfkgp\" (UID: \"e45b3622-b224-4b7b-9c34-944ef33db069\") " pod="kube-system/coredns-7d764666f9-qfkgp"
	Dec 13 13:45:17 no-preload-992258 kubelet[2236]: I1213 13:45:17.044663    2236 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/874ca672-a250-424b-94e2-6bdf29132823-tmp\") pod \"storage-provisioner\" (UID: \"874ca672-a250-424b-94e2-6bdf29132823\") " pod="kube-system/storage-provisioner"
	Dec 13 13:45:17 no-preload-992258 kubelet[2236]: I1213 13:45:17.044706    2236 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt7mb\" (UniqueName: \"kubernetes.io/projected/874ca672-a250-424b-94e2-6bdf29132823-kube-api-access-lt7mb\") pod \"storage-provisioner\" (UID: \"874ca672-a250-424b-94e2-6bdf29132823\") " pod="kube-system/storage-provisioner"
	Dec 13 13:45:17 no-preload-992258 kubelet[2236]: I1213 13:45:17.044854    2236 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tttsh\" (UniqueName: \"kubernetes.io/projected/e45b3622-b224-4b7b-9c34-944ef33db069-kube-api-access-tttsh\") pod \"coredns-7d764666f9-qfkgp\" (UID: \"e45b3622-b224-4b7b-9c34-944ef33db069\") " pod="kube-system/coredns-7d764666f9-qfkgp"
	Dec 13 13:45:17 no-preload-992258 kubelet[2236]: E1213 13:45:17.488185    2236 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-qfkgp" containerName="coredns"
	Dec 13 13:45:17 no-preload-992258 kubelet[2236]: I1213 13:45:17.513819    2236 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-qfkgp" podStartSLOduration=13.513767389 podStartE2EDuration="13.513767389s" podCreationTimestamp="2025-12-13 13:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:45:17.504319773 +0000 UTC m=+19.171781612" watchObservedRunningTime="2025-12-13 13:45:17.513767389 +0000 UTC m=+19.181229227"
	Dec 13 13:45:17 no-preload-992258 kubelet[2236]: I1213 13:45:17.529336    2236 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.529317292 podStartE2EDuration="13.529317292s" podCreationTimestamp="2025-12-13 13:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:45:17.514358403 +0000 UTC m=+19.181820240" watchObservedRunningTime="2025-12-13 13:45:17.529317292 +0000 UTC m=+19.196779142"
	Dec 13 13:45:18 no-preload-992258 kubelet[2236]: E1213 13:45:18.494465    2236 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-qfkgp" containerName="coredns"
	Dec 13 13:45:19 no-preload-992258 kubelet[2236]: E1213 13:45:19.497420    2236 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-qfkgp" containerName="coredns"
	Dec 13 13:45:19 no-preload-992258 kubelet[2236]: I1213 13:45:19.661134    2236 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnn6n\" (UniqueName: \"kubernetes.io/projected/80cbe112-02fa-49c6-8738-accc0daffc9c-kube-api-access-cnn6n\") pod \"busybox\" (UID: \"80cbe112-02fa-49c6-8738-accc0daffc9c\") " pod="default/busybox"
	Dec 13 13:45:21 no-preload-992258 kubelet[2236]: I1213 13:45:21.513651    2236 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.8365038249999999 podStartE2EDuration="2.513633554s" podCreationTimestamp="2025-12-13 13:45:19 +0000 UTC" firstStartedPulling="2025-12-13 13:45:19.958487023 +0000 UTC m=+21.625948840" lastFinishedPulling="2025-12-13 13:45:20.635616736 +0000 UTC m=+22.303078569" observedRunningTime="2025-12-13 13:45:21.513577658 +0000 UTC m=+23.181039496" watchObservedRunningTime="2025-12-13 13:45:21.513633554 +0000 UTC m=+23.181095389"
	
	
	==> storage-provisioner [4137005091d284aac759ae6df58c430fbe84a285ec69cf6eb126ab48fcfecec7] <==
	I1213 13:45:17.367411       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 13:45:17.375883       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 13:45:17.375950       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 13:45:17.378527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:17.384217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 13:45:17.384377       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 13:45:17.384482       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8639f770-ba62-4b85-93df-6f4c8eca72ae", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-992258_441cbe98-85da-4258-9396-5f0384506346 became leader
	I1213 13:45:17.384538       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-992258_441cbe98-85da-4258-9396-5f0384506346!
	W1213 13:45:17.389860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:17.395638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 13:45:17.484807       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-992258_441cbe98-85da-4258-9396-5f0384506346!
	W1213 13:45:19.401061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:19.406245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:21.409538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:21.414440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:23.417815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:23.421871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:25.425086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:25.429007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:27.431892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:27.574763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:29.578767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:29.584133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-992258 -n no-preload-992258
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-992258 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.37s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-973953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-973953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (305.827493ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:45:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-973953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-973953 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-973953 describe deploy/metrics-server -n kube-system: exit status 1 (67.482317ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-973953 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
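The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state check, which (per the quoted stderr) runs `sudo runc list -f json` on the node and fails because /run/runc does not exist there. A minimal sketch for rerunning that check by hand, assuming the profile name and binary path used in this run:

	# reproduce the paused-state check quoted in the stderr above (illustrative only)
	out/minikube-linux-amd64 -p embed-certs-973953 ssh -- sudo runc list -f json

On a crio-based node the running containers can instead be listed with `sudo crictl ps` from inside the same ssh session.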
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-973953
helpers_test.go:244: (dbg) docker inspect embed-certs-973953:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2417f9c1840239bdbd95cda8d94a24c63c197abb274212b1cc09a3ca882e96e4",
	        "Created": "2025-12-13T13:44:57.200288812Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 707823,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:44:57.304243071Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/2417f9c1840239bdbd95cda8d94a24c63c197abb274212b1cc09a3ca882e96e4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2417f9c1840239bdbd95cda8d94a24c63c197abb274212b1cc09a3ca882e96e4/hostname",
	        "HostsPath": "/var/lib/docker/containers/2417f9c1840239bdbd95cda8d94a24c63c197abb274212b1cc09a3ca882e96e4/hosts",
	        "LogPath": "/var/lib/docker/containers/2417f9c1840239bdbd95cda8d94a24c63c197abb274212b1cc09a3ca882e96e4/2417f9c1840239bdbd95cda8d94a24c63c197abb274212b1cc09a3ca882e96e4-json.log",
	        "Name": "/embed-certs-973953",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-973953:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-973953",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2417f9c1840239bdbd95cda8d94a24c63c197abb274212b1cc09a3ca882e96e4",
	                "LowerDir": "/var/lib/docker/overlay2/36f6f9a6afe8167407de04e815de1558c807ba641d95def877516655555a8d70-init/diff:/var/lib/docker/overlay2/2ab30f867418f233812f5ff754587aaeab7569a5579dc6a5c99873a35cf81eb6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/36f6f9a6afe8167407de04e815de1558c807ba641d95def877516655555a8d70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/36f6f9a6afe8167407de04e815de1558c807ba641d95def877516655555a8d70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/36f6f9a6afe8167407de04e815de1558c807ba641d95def877516655555a8d70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-973953",
	                "Source": "/var/lib/docker/volumes/embed-certs-973953/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-973953",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-973953",
	                "name.minikube.sigs.k8s.io": "embed-certs-973953",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "2cf70dfcc55ab43e3b96bc48cee952050cf9cde7c3fcf483c7d85d4bcf4b0850",
	            "SandboxKey": "/var/run/docker/netns/2cf70dfcc55a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33481"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33482"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33485"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33483"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33484"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-973953": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bdd21ce485b56ca4b32dd68df0837eaa769f5169ec1531dea2c7dd03d846c883",
	                    "EndpointID": "cb6a08f5affc49f60747eece8e1978455e11fbae28244277cfa5b178eb282a29",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "aa:e4:43:bc:86:1e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-973953",
	                        "2417f9c18402"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-973953 -n embed-certs-973953
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-973953 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-973953 logs -n 25: (1.026204562s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-884214 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ ssh     │ -p bridge-884214 sudo docker system info                                                                                                                                                                                                      │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ ssh     │ -p bridge-884214 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ ssh     │ -p bridge-884214 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ ssh     │ -p bridge-884214 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ ssh     │ -p bridge-884214 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo containerd config dump                                                                                                                                                                                                  │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo crio config                                                                                                                                                                                                             │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ delete  │ -p bridge-884214                                                                                                                                                                                                                              │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ delete  │ -p disable-driver-mounts-031848                                                                                                                                                                                                               │ disable-driver-mounts-031848 │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p default-k8s-diff-port-038239 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-992258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-417583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p old-k8s-version-417583 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ stop    │ -p no-preload-992258 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-973953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:45:28
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:45:28.684396  717532 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:45:28.684702  717532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:45:28.684717  717532 out.go:374] Setting ErrFile to fd 2...
	I1213 13:45:28.684724  717532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:45:28.685101  717532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:45:28.685735  717532 out.go:368] Setting JSON to false
	I1213 13:45:28.687295  717532 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8877,"bootTime":1765624652,"procs":363,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:45:28.687363  717532 start.go:143] virtualization: kvm guest
	I1213 13:45:28.690114  717532 out.go:179] * [old-k8s-version-417583] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:45:28.691309  717532 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:45:28.691367  717532 notify.go:221] Checking for updates...
	I1213 13:45:28.696656  717532 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:45:28.697886  717532 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:45:28.698983  717532 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:45:28.700928  717532 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:45:28.703267  717532 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:45:28.705174  717532 config.go:182] Loaded profile config "old-k8s-version-417583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1213 13:45:28.707456  717532 out.go:179] * Kubernetes 1.34.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.2
	I1213 13:45:28.709011  717532 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:45:28.750068  717532 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:45:28.750193  717532 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:45:28.835604  717532 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-13 13:45:28.822118443 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:45:28.835796  717532 docker.go:319] overlay module found
	I1213 13:45:28.837617  717532 out.go:179] * Using the docker driver based on existing profile
	I1213 13:45:28.838687  717532 start.go:309] selected driver: docker
	I1213 13:45:28.838708  717532 start.go:927] validating driver "docker" against &{Name:old-k8s-version-417583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-417583 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:45:28.838817  717532 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:45:28.839662  717532 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:45:28.911843  717532 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-13 13:45:28.901811515 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:45:28.912108  717532 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:45:28.912133  717532 cni.go:84] Creating CNI manager for ""
	I1213 13:45:28.912188  717532 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:45:28.912217  717532 start.go:353] cluster config:
	{Name:old-k8s-version-417583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-417583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:45:28.913795  717532 out.go:179] * Starting "old-k8s-version-417583" primary control-plane node in "old-k8s-version-417583" cluster
	I1213 13:45:28.915443  717532 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 13:45:28.917403  717532 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 13:45:28.919062  717532 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1213 13:45:28.919102  717532 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1213 13:45:28.919119  717532 cache.go:65] Caching tarball of preloaded images
	I1213 13:45:28.919193  717532 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 13:45:28.919241  717532 preload.go:238] Found /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 13:45:28.919255  717532 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1213 13:45:28.919364  717532 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/old-k8s-version-417583/config.json ...
	I1213 13:45:28.943034  717532 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 13:45:28.943057  717532 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 13:45:28.943079  717532 cache.go:243] Successfully downloaded all kic artifacts
	I1213 13:45:28.943123  717532 start.go:360] acquireMachinesLock for old-k8s-version-417583: {Name:mk9f0d3b267b189d449d3a52ae1671bf232a0ca6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:45:28.943198  717532 start.go:364] duration metric: took 49.73µs to acquireMachinesLock for "old-k8s-version-417583"
	I1213 13:45:28.943222  717532 start.go:96] Skipping create...Using existing machine configuration
	I1213 13:45:28.943233  717532 fix.go:54] fixHost starting: 
	I1213 13:45:28.943519  717532 cli_runner.go:164] Run: docker container inspect old-k8s-version-417583 --format={{.State.Status}}
	I1213 13:45:28.966406  717532 fix.go:112] recreateIfNeeded on old-k8s-version-417583: state=Stopped err=<nil>
	W1213 13:45:28.966594  717532 fix.go:138] unexpected machine state, will restart: <nil>
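The "will restart" decision above keys off the container status returned by `docker container inspect --format={{.State.Status}}` (the cli_runner call two lines earlier, which came back "Stopped"). As a rough illustration only, and not minikube's actual implementation, that check amounts to shelling out and trimming the template output:

	// containerStatus returns the Docker-reported state string for a named
	// container (e.g. "running", "exited") by running `docker container inspect`
	// with a Go template, mirroring the cli_runner line in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerStatus(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		status, err := containerStatus("old-k8s-version-417583")
		fmt.Println(status, err)
	}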
	I1213 13:45:28.519983  716415 oci.go:144] the created container "default-k8s-diff-port-038239" has a running status.
	I1213 13:45:28.520018  716415 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa...
	I1213 13:45:28.677707  716415 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 13:45:28.717614  716415 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-038239 --format={{.State.Status}}
	I1213 13:45:28.752931  716415 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 13:45:28.752958  716415 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-038239 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 13:45:28.819175  716415 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-038239 --format={{.State.Status}}
	I1213 13:45:28.844191  716415 machine.go:94] provisionDockerMachine start ...
	I1213 13:45:28.844313  716415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-038239
	I1213 13:45:28.867720  716415 main.go:143] libmachine: Using SSH client type: native
	I1213 13:45:28.868167  716415 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33487 <nil> <nil>}
	I1213 13:45:28.868317  716415 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:45:29.021029  716415 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-038239
	
	I1213 13:45:29.021061  716415 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-038239"
	I1213 13:45:29.021128  716415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-038239
	I1213 13:45:29.044736  716415 main.go:143] libmachine: Using SSH client type: native
	I1213 13:45:29.045029  716415 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33487 <nil> <nil>}
	I1213 13:45:29.045050  716415 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-038239 && echo "default-k8s-diff-port-038239" | sudo tee /etc/hostname
	I1213 13:45:29.198627  716415 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-038239
	
	I1213 13:45:29.198728  716415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-038239
	I1213 13:45:29.222139  716415 main.go:143] libmachine: Using SSH client type: native
	I1213 13:45:29.222364  716415 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33487 <nil> <nil>}
	I1213 13:45:29.222382  716415 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-038239' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-038239/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-038239' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:45:29.364356  716415 main.go:143] libmachine: SSH cmd err, output: <nil>: 
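The three SSH exchanges above (hostname, writing /etc/hostname, patching /etc/hosts) all go through libmachine's "native" SSH client against the container's forwarded port 127.0.0.1:33487 using the machine's id_rsa key. A minimal sketch of that pattern, assuming golang.org/x/crypto/ssh and using the key path from the log (this is not minikube's own code):

	// Connect to the kic container's forwarded SSH port and run one command,
	// the way the provisioning steps above do.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM only
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33487", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()

		out, err := sess.CombinedOutput("hostname")
		fmt.Println(string(out), err)
	}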
	I1213 13:45:29.364390  716415 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-390571/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-390571/.minikube}
	I1213 13:45:29.364426  716415 ubuntu.go:190] setting up certificates
	I1213 13:45:29.364441  716415 provision.go:84] configureAuth start
	I1213 13:45:29.364517  716415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-038239
	I1213 13:45:29.389627  716415 provision.go:143] copyHostCerts
	I1213 13:45:29.389806  716415 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem, removing ...
	I1213 13:45:29.389817  716415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem
	I1213 13:45:29.389902  716415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem (1679 bytes)
	I1213 13:45:29.390029  716415 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem, removing ...
	I1213 13:45:29.390042  716415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem
	I1213 13:45:29.390088  716415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem (1078 bytes)
	I1213 13:45:29.390188  716415 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem, removing ...
	I1213 13:45:29.390202  716415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem
	I1213 13:45:29.390254  716415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem (1123 bytes)
	I1213 13:45:29.390359  716415 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-038239 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-038239 localhost minikube]
	I1213 13:45:29.463480  716415 provision.go:177] copyRemoteCerts
	I1213 13:45:29.463548  716415 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:45:29.463596  716415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-038239
	I1213 13:45:29.488449  716415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33487 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa Username:docker}
	I1213 13:45:29.592157  716415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:45:29.614765  716415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1213 13:45:29.638612  716415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 13:45:29.658056  716415 provision.go:87] duration metric: took 293.591415ms to configureAuth
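configureAuth above generates a server certificate whose SANs are exactly the list logged by provision.go (127.0.0.1, 192.168.94.2, the machine name, localhost, minikube) and then scp's it into /etc/docker. The sketch below, using only the standard crypto/x509 package, shows a certificate built with those SANs; for brevity it is self-signed, whereas the real flow signs with ca.pem/ca-key.pem:

	// Build a server certificate carrying the SANs from the log above.
	// Self-signed here for brevity; minikube signs with its CA instead.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-038239"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the provision.go line above.
			DNSNames:    []string{"default-k8s-diff-port-038239", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}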
	I1213 13:45:29.658086  716415 ubuntu.go:206] setting minikube options for container-runtime
	I1213 13:45:29.658264  716415 config.go:182] Loaded profile config "default-k8s-diff-port-038239": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:45:29.658375  716415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-038239
	I1213 13:45:29.678234  716415 main.go:143] libmachine: Using SSH client type: native
	I1213 13:45:29.678480  716415 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33487 <nil> <nil>}
	I1213 13:45:29.678503  716415 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:45:29.986324  716415 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:45:29.986350  716415 machine.go:97] duration metric: took 1.142128658s to provisionDockerMachine
	I1213 13:45:29.986361  716415 client.go:176] duration metric: took 6.303837759s to LocalClient.Create
	I1213 13:45:29.986382  716415 start.go:167] duration metric: took 6.30391074s to libmachine.API.Create "default-k8s-diff-port-038239"
	I1213 13:45:29.986392  716415 start.go:293] postStartSetup for "default-k8s-diff-port-038239" (driver="docker")
	I1213 13:45:29.986407  716415 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:45:29.986471  716415 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:45:29.986521  716415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-038239
	I1213 13:45:30.007822  716415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33487 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa Username:docker}
	I1213 13:45:30.107740  716415 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:45:30.111477  716415 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 13:45:30.111510  716415 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 13:45:30.111524  716415 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/addons for local assets ...
	I1213 13:45:30.111578  716415 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/files for local assets ...
	I1213 13:45:30.111695  716415 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem -> 3941302.pem in /etc/ssl/certs
	I1213 13:45:30.111857  716415 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 13:45:30.120411  716415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:45:30.142917  716415 start.go:296] duration metric: took 156.510701ms for postStartSetup
	I1213 13:45:30.143312  716415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-038239
	I1213 13:45:30.160947  716415 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/config.json ...
	I1213 13:45:30.161181  716415 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:45:30.161221  716415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-038239
	I1213 13:45:30.180317  716415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33487 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa Username:docker}
	I1213 13:45:30.277799  716415 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 13:45:30.284316  716415 start.go:128] duration metric: took 6.603866322s to createHost
	I1213 13:45:30.284345  716415 start.go:83] releasing machines lock for "default-k8s-diff-port-038239", held for 6.604016756s
	I1213 13:45:30.284492  716415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-038239
	I1213 13:45:30.310981  716415 ssh_runner.go:195] Run: cat /version.json
	I1213 13:45:30.311066  716415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-038239
	I1213 13:45:30.310942  716415 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:45:30.311330  716415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-038239
	I1213 13:45:30.336865  716415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33487 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa Username:docker}
	I1213 13:45:30.337893  716415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33487 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa Username:docker}
	I1213 13:45:30.437661  716415 ssh_runner.go:195] Run: systemctl --version
	I1213 13:45:30.497685  716415 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:45:30.534857  716415 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 13:45:30.539597  716415 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:45:30.539655  716415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:45:30.566133  716415 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 13:45:30.566162  716415 start.go:496] detecting cgroup driver to use...
	I1213 13:45:30.566200  716415 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 13:45:30.566251  716415 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:45:30.582457  716415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:45:30.594715  716415 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:45:30.594799  716415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:45:30.611231  716415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:45:30.628175  716415 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:45:30.744819  716415 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:45:30.843754  716415 docker.go:234] disabling docker service ...
	I1213 13:45:30.843819  716415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:45:30.862094  716415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:45:30.874277  716415 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:45:30.959579  716415 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:45:31.040147  716415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:45:31.052416  716415 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:45:31.066219  716415 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 13:45:31.066270  716415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:45:31.075985  716415 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 13:45:31.076048  716415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:45:31.084458  716415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:45:31.092926  716415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:45:31.101229  716415 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:45:31.108880  716415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:45:31.117233  716415 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:45:31.130132  716415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:45:31.138348  716415 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:45:31.145306  716415 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:45:31.152303  716415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:45:31.234238  716415 ssh_runner.go:195] Run: sudo systemctl restart crio
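The sed/grep sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before restarting cri-o: it pins the pause image, switches the cgroup manager to systemd, re-adds conmon_cgroup, and opens unprivileged ports via default_sysctls. After those edits the drop-in would plausibly contain lines like the following (a reconstruction from the commands above; other defaults are omitted and ordering is approximate):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]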
	I1213 13:45:31.368427  716415 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:45:31.368506  716415 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:45:31.372584  716415 start.go:564] Will wait 60s for crictl version
	I1213 13:45:31.372635  716415 ssh_runner.go:195] Run: which crictl
	I1213 13:45:31.376162  716415 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 13:45:31.401405  716415 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 13:45:31.401483  716415 ssh_runner.go:195] Run: crio --version
	I1213 13:45:31.429355  716415 ssh_runner.go:195] Run: crio --version
	I1213 13:45:31.458724  716415 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
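The two "Will wait 60s" lines above are simple bounded polls: first for the CRI socket to appear, then for crictl to answer. A minimal sketch of that pattern, with the same socket path and timeout as the log (an illustration, not the exact start.go code):

	// Poll until the CRI-O socket exists or the 60s deadline passes,
	// corresponding to "Will wait 60s for socket path /var/run/crio/crio.sock".
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
	}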
	W1213 13:45:27.676952  706714 node_ready.go:57] node "embed-certs-973953" has "Ready":"False" status (will retry)
	I1213 13:45:30.117094  706714 node_ready.go:49] node "embed-certs-973953" is "Ready"
	I1213 13:45:30.117116  706714 node_ready.go:38] duration metric: took 10.50239694s for node "embed-certs-973953" to be "Ready" ...
	I1213 13:45:30.117129  706714 api_server.go:52] waiting for apiserver process to appear ...
	I1213 13:45:30.117173  706714 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:45:30.129352  706714 api_server.go:72] duration metric: took 10.868935646s to wait for apiserver process to appear ...
	I1213 13:45:30.129373  706714 api_server.go:88] waiting for apiserver healthz status ...
	I1213 13:45:30.129389  706714 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1213 13:45:30.134856  706714 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1213 13:45:30.135829  706714 api_server.go:141] control plane version: v1.34.2
	I1213 13:45:30.135857  706714 api_server.go:131] duration metric: took 6.476837ms to wait for apiserver health ...
	I1213 13:45:30.135868  706714 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 13:45:30.139278  706714 system_pods.go:59] 8 kube-system pods found
	I1213 13:45:30.139322  706714 system_pods.go:61] "coredns-66bc5c9577-bl59n" [b6e3ac25-b7ec-49c7-b8a1-b37a1adcdd5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:45:30.139332  706714 system_pods.go:61] "etcd-embed-certs-973953" [a9272b08-1c70-4ee9-9342-8918a71f2072] Running
	I1213 13:45:30.139340  706714 system_pods.go:61] "kindnet-bw5d4" [71bc6690-d0e7-4a63-a26e-fcee1b63c294] Running
	I1213 13:45:30.139346  706714 system_pods.go:61] "kube-apiserver-embed-certs-973953" [0df9cf2a-cf21-4f12-b58c-daf6852a4579] Running
	I1213 13:45:30.139355  706714 system_pods.go:61] "kube-controller-manager-embed-certs-973953" [a98f2a3a-5bb2-4b03-8cb5-ca0d4ba3ba15] Running
	I1213 13:45:30.139371  706714 system_pods.go:61] "kube-proxy-jqcpv" [24f1d48c-0a0c-44ad-b091-b6cd3a472231] Running
	I1213 13:45:30.139380  706714 system_pods.go:61] "kube-scheduler-embed-certs-973953" [4efa6435-7ac7-46b0-ba0d-abd549e2182b] Running
	I1213 13:45:30.139387  706714 system_pods.go:61] "storage-provisioner" [17970a2f-3f70-4f46-9d99-c7c806730329] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:45:30.139396  706714 system_pods.go:74] duration metric: took 3.520131ms to wait for pod list to return data ...
	I1213 13:45:30.139429  706714 default_sa.go:34] waiting for default service account to be created ...
	I1213 13:45:30.141861  706714 default_sa.go:45] found service account: "default"
	I1213 13:45:30.141885  706714 default_sa.go:55] duration metric: took 2.448556ms for default service account to be created ...
	I1213 13:45:30.141896  706714 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 13:45:30.144653  706714 system_pods.go:86] 8 kube-system pods found
	I1213 13:45:30.144679  706714 system_pods.go:89] "coredns-66bc5c9577-bl59n" [b6e3ac25-b7ec-49c7-b8a1-b37a1adcdd5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:45:30.144685  706714 system_pods.go:89] "etcd-embed-certs-973953" [a9272b08-1c70-4ee9-9342-8918a71f2072] Running
	I1213 13:45:30.144690  706714 system_pods.go:89] "kindnet-bw5d4" [71bc6690-d0e7-4a63-a26e-fcee1b63c294] Running
	I1213 13:45:30.144695  706714 system_pods.go:89] "kube-apiserver-embed-certs-973953" [0df9cf2a-cf21-4f12-b58c-daf6852a4579] Running
	I1213 13:45:30.144707  706714 system_pods.go:89] "kube-controller-manager-embed-certs-973953" [a98f2a3a-5bb2-4b03-8cb5-ca0d4ba3ba15] Running
	I1213 13:45:30.144711  706714 system_pods.go:89] "kube-proxy-jqcpv" [24f1d48c-0a0c-44ad-b091-b6cd3a472231] Running
	I1213 13:45:30.144715  706714 system_pods.go:89] "kube-scheduler-embed-certs-973953" [4efa6435-7ac7-46b0-ba0d-abd549e2182b] Running
	I1213 13:45:30.144719  706714 system_pods.go:89] "storage-provisioner" [17970a2f-3f70-4f46-9d99-c7c806730329] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:45:30.144752  706714 retry.go:31] will retry after 302.147133ms: missing components: kube-dns
	I1213 13:45:30.451340  706714 system_pods.go:86] 8 kube-system pods found
	I1213 13:45:30.451383  706714 system_pods.go:89] "coredns-66bc5c9577-bl59n" [b6e3ac25-b7ec-49c7-b8a1-b37a1adcdd5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:45:30.451392  706714 system_pods.go:89] "etcd-embed-certs-973953" [a9272b08-1c70-4ee9-9342-8918a71f2072] Running
	I1213 13:45:30.451407  706714 system_pods.go:89] "kindnet-bw5d4" [71bc6690-d0e7-4a63-a26e-fcee1b63c294] Running
	I1213 13:45:30.451414  706714 system_pods.go:89] "kube-apiserver-embed-certs-973953" [0df9cf2a-cf21-4f12-b58c-daf6852a4579] Running
	I1213 13:45:30.451426  706714 system_pods.go:89] "kube-controller-manager-embed-certs-973953" [a98f2a3a-5bb2-4b03-8cb5-ca0d4ba3ba15] Running
	I1213 13:45:30.451435  706714 system_pods.go:89] "kube-proxy-jqcpv" [24f1d48c-0a0c-44ad-b091-b6cd3a472231] Running
	I1213 13:45:30.451441  706714 system_pods.go:89] "kube-scheduler-embed-certs-973953" [4efa6435-7ac7-46b0-ba0d-abd549e2182b] Running
	I1213 13:45:30.451452  706714 system_pods.go:89] "storage-provisioner" [17970a2f-3f70-4f46-9d99-c7c806730329] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:45:30.451475  706714 retry.go:31] will retry after 341.034784ms: missing components: kube-dns
	I1213 13:45:30.797082  706714 system_pods.go:86] 8 kube-system pods found
	I1213 13:45:30.797114  706714 system_pods.go:89] "coredns-66bc5c9577-bl59n" [b6e3ac25-b7ec-49c7-b8a1-b37a1adcdd5e] Running
	I1213 13:45:30.797122  706714 system_pods.go:89] "etcd-embed-certs-973953" [a9272b08-1c70-4ee9-9342-8918a71f2072] Running
	I1213 13:45:30.797127  706714 system_pods.go:89] "kindnet-bw5d4" [71bc6690-d0e7-4a63-a26e-fcee1b63c294] Running
	I1213 13:45:30.797133  706714 system_pods.go:89] "kube-apiserver-embed-certs-973953" [0df9cf2a-cf21-4f12-b58c-daf6852a4579] Running
	I1213 13:45:30.797139  706714 system_pods.go:89] "kube-controller-manager-embed-certs-973953" [a98f2a3a-5bb2-4b03-8cb5-ca0d4ba3ba15] Running
	I1213 13:45:30.797143  706714 system_pods.go:89] "kube-proxy-jqcpv" [24f1d48c-0a0c-44ad-b091-b6cd3a472231] Running
	I1213 13:45:30.797149  706714 system_pods.go:89] "kube-scheduler-embed-certs-973953" [4efa6435-7ac7-46b0-ba0d-abd549e2182b] Running
	I1213 13:45:30.797154  706714 system_pods.go:89] "storage-provisioner" [17970a2f-3f70-4f46-9d99-c7c806730329] Running
	I1213 13:45:30.797172  706714 system_pods.go:126] duration metric: took 655.26123ms to wait for k8s-apps to be running ...
	I1213 13:45:30.797186  706714 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 13:45:30.797234  706714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:45:30.809843  706714 system_svc.go:56] duration metric: took 12.648696ms WaitForService to wait for kubelet
	I1213 13:45:30.809869  706714 kubeadm.go:587] duration metric: took 11.549456193s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:45:30.809901  706714 node_conditions.go:102] verifying NodePressure condition ...
	I1213 13:45:30.812897  706714 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 13:45:30.812921  706714 node_conditions.go:123] node cpu capacity is 8
	I1213 13:45:30.812937  706714 node_conditions.go:105] duration metric: took 3.032364ms to run NodePressure ...
	I1213 13:45:30.812949  706714 start.go:242] waiting for startup goroutines ...
	I1213 13:45:30.812956  706714 start.go:247] waiting for cluster config update ...
	I1213 13:45:30.812966  706714 start.go:256] writing updated cluster config ...
	I1213 13:45:30.813203  706714 ssh_runner.go:195] Run: rm -f paused
	I1213 13:45:30.816943  706714 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:45:30.820051  706714 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bl59n" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:45:30.823853  706714 pod_ready.go:94] pod "coredns-66bc5c9577-bl59n" is "Ready"
	I1213 13:45:30.823871  706714 pod_ready.go:86] duration metric: took 3.795792ms for pod "coredns-66bc5c9577-bl59n" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:45:30.825627  706714 pod_ready.go:83] waiting for pod "etcd-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:45:30.829127  706714 pod_ready.go:94] pod "etcd-embed-certs-973953" is "Ready"
	I1213 13:45:30.829158  706714 pod_ready.go:86] duration metric: took 3.513188ms for pod "etcd-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:45:30.830949  706714 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:45:30.834447  706714 pod_ready.go:94] pod "kube-apiserver-embed-certs-973953" is "Ready"
	I1213 13:45:30.834466  706714 pod_ready.go:86] duration metric: took 3.498441ms for pod "kube-apiserver-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:45:30.836388  706714 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:45:31.222278  706714 pod_ready.go:94] pod "kube-controller-manager-embed-certs-973953" is "Ready"
	I1213 13:45:31.222312  706714 pod_ready.go:86] duration metric: took 385.904294ms for pod "kube-controller-manager-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:45:31.421432  706714 pod_ready.go:83] waiting for pod "kube-proxy-jqcpv" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:45:31.821636  706714 pod_ready.go:94] pod "kube-proxy-jqcpv" is "Ready"
	I1213 13:45:31.821670  706714 pod_ready.go:86] duration metric: took 400.20412ms for pod "kube-proxy-jqcpv" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:45:32.021480  706714 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:45:32.421654  706714 pod_ready.go:94] pod "kube-scheduler-embed-certs-973953" is "Ready"
	I1213 13:45:32.421678  706714 pod_ready.go:86] duration metric: took 400.167244ms for pod "kube-scheduler-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:45:32.421691  706714 pod_ready.go:40] duration metric: took 1.604726575s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:45:32.471422  706714 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 13:45:32.473299  706714 out.go:179] * Done! kubectl is now configured to use "embed-certs-973953" cluster and "default" namespace by default
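The pod_ready.go waits above consider a pod "Ready" once its PodReady condition reports True. A minimal sketch of that check, assuming client-go is available and using the kubeconfig in the default location plus one pod name taken from the log (this is an illustration, not minikube's pod_ready.go):

	// Fetch one kube-system pod and report whether its PodReady condition is True.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-bl59n", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("ready:", isPodReady(pod))
	}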
	I1213 13:45:31.459734  716415 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-038239 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:45:31.476803  716415 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1213 13:45:31.480882  716415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:45:31.491279  716415 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-038239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-038239 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:45:31.491420  716415 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:45:31.491466  716415 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:45:31.523198  716415 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:45:31.523220  716415 crio.go:433] Images already preloaded, skipping extraction
	I1213 13:45:31.523268  716415 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:45:31.549557  716415 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:45:31.549578  716415 cache_images.go:86] Images are preloaded, skipping loading
	I1213 13:45:31.549587  716415 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.2 crio true true} ...
	I1213 13:45:31.549688  716415 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-038239 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-038239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 13:45:31.549751  716415 ssh_runner.go:195] Run: crio config
	I1213 13:45:31.596239  716415 cni.go:84] Creating CNI manager for ""
	I1213 13:45:31.596263  716415 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:45:31.596279  716415 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 13:45:31.596302  716415 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-038239 NodeName:default-k8s-diff-port-038239 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:45:31.596412  716415 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-038239"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 13:45:31.596473  716415 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 13:45:31.604752  716415 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:45:31.604833  716415 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:45:31.612547  716415 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1213 13:45:31.626118  716415 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 13:45:31.640473  716415 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
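The generated kubeadm config shown above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new. A small sketch, assuming gopkg.in/yaml.v3 and an illustrative local file path, that walks such a multi-document file and prints each document's apiVersion and kind:

	// Decode each YAML document in turn and print its header, e.g. to confirm
	// all four expected kubeadm/kubelet/kube-proxy documents are present.
	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	type header struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	func main() {
		f, err := os.Open("kubeadm.yaml") // path is illustrative
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var h header
			if err := dec.Decode(&h); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("%s %s\n", h.APIVersion, h.Kind)
		}
	}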
	I1213 13:45:31.653180  716415 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1213 13:45:31.657019  716415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:45:31.667557  716415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:45:31.760670  716415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:45:31.779537  716415 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239 for IP: 192.168.94.2
	I1213 13:45:31.779563  716415 certs.go:195] generating shared ca certs ...
	I1213 13:45:31.779588  716415 certs.go:227] acquiring lock for ca certs: {Name:mkb6963f3134ffd486c672ddb3a967e56122d5d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:45:31.779770  716415 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key
	I1213 13:45:31.779859  716415 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key
	I1213 13:45:31.779874  716415 certs.go:257] generating profile certs ...
	I1213 13:45:31.779940  716415 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/client.key
	I1213 13:45:31.779960  716415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/client.crt with IP's: []
	I1213 13:45:31.832069  716415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/client.crt ...
	I1213 13:45:31.832099  716415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/client.crt: {Name:mk47511663ddb219adad809bfce200c58587a1ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:45:31.832261  716415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/client.key ...
	I1213 13:45:31.832279  716415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/client.key: {Name:mk04b36992de33b900bdca3d7e51d9cc0e05e638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:45:31.832387  716415 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/apiserver.key.e680d21c
	I1213 13:45:31.832407  716415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/apiserver.crt.e680d21c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1213 13:45:31.856009  716415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/apiserver.crt.e680d21c ...
	I1213 13:45:31.856033  716415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/apiserver.crt.e680d21c: {Name:mk381d6b1b192dd529f7a192d3551eece40f0e3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:45:31.856163  716415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/apiserver.key.e680d21c ...
	I1213 13:45:31.856176  716415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/apiserver.key.e680d21c: {Name:mke0ab365ab4a3510c82fb56bf1a25738583adfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:45:31.856246  716415 certs.go:382] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/apiserver.crt.e680d21c -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/apiserver.crt
	I1213 13:45:31.856313  716415 certs.go:386] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/apiserver.key.e680d21c -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/apiserver.key
	I1213 13:45:31.856370  716415 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/proxy-client.key
	I1213 13:45:31.856384  716415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/proxy-client.crt with IP's: []
	I1213 13:45:31.946161  716415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/proxy-client.crt ...
	I1213 13:45:31.946200  716415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/proxy-client.crt: {Name:mkc8c87b0a6564a7f572578650ba72b3b896a654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:45:31.946394  716415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/proxy-client.key ...
	I1213 13:45:31.946420  716415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/proxy-client.key: {Name:mk746f6f7dbbfc3bd0e4e22016f6693f7640c95f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:45:31.946639  716415 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem (1338 bytes)
	W1213 13:45:31.946681  716415 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130_empty.pem, impossibly tiny 0 bytes
	I1213 13:45:31.946688  716415 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:45:31.946711  716415 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:45:31.946737  716415 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:45:31.946761  716415 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem (1679 bytes)
	I1213 13:45:31.946833  716415 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:45:31.947428  716415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:45:31.967075  716415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 13:45:31.985850  716415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:45:32.003273  716415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 13:45:32.021600  716415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1213 13:45:32.039049  716415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 13:45:32.055363  716415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:45:32.071783  716415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 13:45:32.088547  716415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem --> /usr/share/ca-certificates/394130.pem (1338 bytes)
	I1213 13:45:32.107037  716415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /usr/share/ca-certificates/3941302.pem (1708 bytes)
	I1213 13:45:32.124253  716415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:45:32.140606  716415 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:45:32.152439  716415 ssh_runner.go:195] Run: openssl version
	I1213 13:45:32.158127  716415 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:45:32.165034  716415 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:45:32.171916  716415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:45:32.175349  716415 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:45:32.175392  716415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:45:32.209510  716415 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 13:45:32.217163  716415 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 13:45:32.224702  716415 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/394130.pem
	I1213 13:45:32.231446  716415 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/394130.pem /etc/ssl/certs/394130.pem
	I1213 13:45:32.238378  716415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/394130.pem
	I1213 13:45:32.241871  716415 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/394130.pem
	I1213 13:45:32.241928  716415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/394130.pem
	I1213 13:45:32.275242  716415 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 13:45:32.283078  716415 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/394130.pem /etc/ssl/certs/51391683.0
	I1213 13:45:32.290405  716415 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3941302.pem
	I1213 13:45:32.297560  716415 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3941302.pem /etc/ssl/certs/3941302.pem
	I1213 13:45:32.304899  716415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3941302.pem
	I1213 13:45:32.308386  716415 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/3941302.pem
	I1213 13:45:32.308435  716415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3941302.pem
	I1213 13:45:32.345214  716415 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 13:45:32.354413  716415 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3941302.pem /etc/ssl/certs/3ec20f2e.0
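A minimal sketch of the hash-and-symlink step above: OpenSSL resolves CAs in /etc/ssl/certs by <subject-hash>.0, which is where names like b5213941.0 and 3ec20f2e.0 come from.

	# compute the subject hash of the copied CA and expose it under /etc/ssl/certs
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")     # e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"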
	I1213 13:45:32.362059  716415 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:45:32.365748  716415 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 13:45:32.365839  716415 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-038239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-038239 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:45:32.365941  716415 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:45:32.366021  716415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:45:32.393226  716415 cri.go:89] found id: ""
	I1213 13:45:32.393291  716415 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:45:32.402605  716415 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 13:45:32.411096  716415 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 13:45:32.411189  716415 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 13:45:32.419001  716415 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 13:45:32.419020  716415 kubeadm.go:158] found existing configuration files:
	
	I1213 13:45:32.419062  716415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1213 13:45:32.427555  716415 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 13:45:32.427611  716415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 13:45:32.436115  716415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1213 13:45:32.443973  716415 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 13:45:32.444029  716415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 13:45:32.452264  716415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1213 13:45:32.461013  716415 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 13:45:32.461058  716415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 13:45:32.469250  716415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1213 13:45:32.477579  716415 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 13:45:32.477628  716415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
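A condensed sketch of the stale-config pass above: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed before kubeadm runs.

	ENDPOINT="https://control-plane.minikube.internal:8444"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # missing file or wrong endpoint -> remove so kubeadm regenerates it
	  sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" 2>/dev/null || sudo rm -f "/etc/kubernetes/$f"
	done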
	I1213 13:45:32.484937  716415 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 13:45:32.531175  716415 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 13:45:32.531250  716415 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 13:45:32.575411  716415 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 13:45:32.575473  716415 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1213 13:45:32.575502  716415 kubeadm.go:319] OS: Linux
	I1213 13:45:32.575577  716415 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 13:45:32.575653  716415 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 13:45:32.575711  716415 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 13:45:32.575768  716415 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 13:45:32.575847  716415 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 13:45:32.575904  716415 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 13:45:32.575961  716415 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 13:45:32.576015  716415 kubeadm.go:319] CGROUPS_IO: enabled
	I1213 13:45:32.650103  716415 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 13:45:32.650419  716415 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 13:45:32.650586  716415 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
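As the preflight message notes, the control-plane images can be pre-pulled; a sketch using the Kubernetes version and CRI socket from this run:

	sudo kubeadm config images pull --kubernetes-version v1.34.2 \
	  --cri-socket unix:///var/run/crio/crio.sock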
	I1213 13:45:32.659073  716415 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 13:45:32.662017  716415 out.go:252]   - Generating certificates and keys ...
	I1213 13:45:32.662132  716415 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 13:45:32.662230  716415 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 13:45:32.865230  716415 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 13:45:32.942946  716415 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 13:45:33.087890  716415 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 13:45:28.968198  717532 out.go:252] * Restarting existing docker container for "old-k8s-version-417583" ...
	I1213 13:45:28.968397  717532 cli_runner.go:164] Run: docker start old-k8s-version-417583
	I1213 13:45:29.245312  717532 cli_runner.go:164] Run: docker container inspect old-k8s-version-417583 --format={{.State.Status}}
	I1213 13:45:29.266095  717532 kic.go:430] container "old-k8s-version-417583" state is running.
	I1213 13:45:29.266587  717532 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-417583
	I1213 13:45:29.288190  717532 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/old-k8s-version-417583/config.json ...
	I1213 13:45:29.288485  717532 machine.go:94] provisionDockerMachine start ...
	I1213 13:45:29.288571  717532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-417583
	I1213 13:45:29.308944  717532 main.go:143] libmachine: Using SSH client type: native
	I1213 13:45:29.309281  717532 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1213 13:45:29.309300  717532 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:45:29.309957  717532 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33912->127.0.0.1:33493: read: connection reset by peer
	I1213 13:45:32.447005  717532 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-417583
	
	I1213 13:45:32.447033  717532 ubuntu.go:182] provisioning hostname "old-k8s-version-417583"
	I1213 13:45:32.447102  717532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-417583
	I1213 13:45:32.467323  717532 main.go:143] libmachine: Using SSH client type: native
	I1213 13:45:32.467535  717532 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1213 13:45:32.467548  717532 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-417583 && echo "old-k8s-version-417583" | sudo tee /etc/hostname
	I1213 13:45:32.622836  717532 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-417583
	
	I1213 13:45:32.622940  717532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-417583
	I1213 13:45:32.646392  717532 main.go:143] libmachine: Using SSH client type: native
	I1213 13:45:32.646721  717532 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1213 13:45:32.646751  717532 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-417583' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-417583/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-417583' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:45:32.782373  717532 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 13:45:32.782403  717532 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-390571/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-390571/.minikube}
	I1213 13:45:32.782449  717532 ubuntu.go:190] setting up certificates
	I1213 13:45:32.782462  717532 provision.go:84] configureAuth start
	I1213 13:45:32.782529  717532 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-417583
	I1213 13:45:32.800392  717532 provision.go:143] copyHostCerts
	I1213 13:45:32.800467  717532 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem, removing ...
	I1213 13:45:32.800487  717532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem
	I1213 13:45:32.800569  717532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem (1078 bytes)
	I1213 13:45:32.800730  717532 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem, removing ...
	I1213 13:45:32.800747  717532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem
	I1213 13:45:32.800809  717532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem (1123 bytes)
	I1213 13:45:32.800925  717532 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem, removing ...
	I1213 13:45:32.800938  717532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem
	I1213 13:45:32.800975  717532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem (1679 bytes)
	I1213 13:45:32.801066  717532 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-417583 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-417583]
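minikube generates this server certificate in Go (provision.go); for illustration only, a rough openssl equivalent using the org and SANs from the line above (file names are hypothetical):

	openssl req -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.old-k8s-version-417583"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:old-k8s-version-417583")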
	I1213 13:45:32.884283  717532 provision.go:177] copyRemoteCerts
	I1213 13:45:32.884348  717532 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:45:32.884398  717532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-417583
	I1213 13:45:32.903138  717532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/old-k8s-version-417583/id_rsa Username:docker}
	I1213 13:45:33.002819  717532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:45:33.020339  717532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1213 13:45:33.037214  717532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 13:45:33.055052  717532 provision.go:87] duration metric: took 272.568469ms to configureAuth
	I1213 13:45:33.055078  717532 ubuntu.go:206] setting minikube options for container-runtime
	I1213 13:45:33.055267  717532 config.go:182] Loaded profile config "old-k8s-version-417583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1213 13:45:33.055396  717532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-417583
	I1213 13:45:33.072983  717532 main.go:143] libmachine: Using SSH client type: native
	I1213 13:45:33.073209  717532 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1213 13:45:33.073239  717532 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:45:33.388341  717532 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:45:33.388379  717532 machine.go:97] duration metric: took 4.099874977s to provisionDockerMachine
	I1213 13:45:33.388396  717532 start.go:293] postStartSetup for "old-k8s-version-417583" (driver="docker")
	I1213 13:45:33.388411  717532 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:45:33.388491  717532 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:45:33.388540  717532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-417583
	I1213 13:45:33.410022  717532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/old-k8s-version-417583/id_rsa Username:docker}
	I1213 13:45:33.514291  717532 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:45:33.518191  717532 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 13:45:33.518216  717532 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 13:45:33.518226  717532 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/addons for local assets ...
	I1213 13:45:33.518275  717532 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/files for local assets ...
	I1213 13:45:33.518352  717532 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem -> 3941302.pem in /etc/ssl/certs
	I1213 13:45:33.518430  717532 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 13:45:33.528029  717532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:45:33.545530  717532 start.go:296] duration metric: took 157.116428ms for postStartSetup
	I1213 13:45:33.545608  717532 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:45:33.545661  717532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-417583
	I1213 13:45:33.564868  717532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/old-k8s-version-417583/id_rsa Username:docker}
	I1213 13:45:33.665104  717532 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 13:45:33.669751  717532 fix.go:56] duration metric: took 4.726512078s for fixHost
	I1213 13:45:33.669785  717532 start.go:83] releasing machines lock for "old-k8s-version-417583", held for 4.726562885s
	I1213 13:45:33.669867  717532 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-417583
	I1213 13:45:33.687936  717532 ssh_runner.go:195] Run: cat /version.json
	I1213 13:45:33.687981  717532 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:45:33.688035  717532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-417583
	I1213 13:45:33.687985  717532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-417583
	I1213 13:45:33.708137  717532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/old-k8s-version-417583/id_rsa Username:docker}
	I1213 13:45:33.708553  717532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/old-k8s-version-417583/id_rsa Username:docker}
	I1213 13:45:33.800273  717532 ssh_runner.go:195] Run: systemctl --version
	I1213 13:45:33.854263  717532 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:45:33.890088  717532 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 13:45:33.894863  717532 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:45:33.894937  717532 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:45:33.902660  717532 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 13:45:33.902678  717532 start.go:496] detecting cgroup driver to use...
	I1213 13:45:33.902705  717532 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 13:45:33.902737  717532 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:45:33.916549  717532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:45:33.928461  717532 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:45:33.928518  717532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:45:33.943335  717532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:45:33.955476  717532 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:45:34.032494  717532 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:45:34.113205  717532 docker.go:234] disabling docker service ...
	I1213 13:45:34.113271  717532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:45:34.127656  717532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:45:34.140068  717532 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:45:34.227002  717532 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:45:34.309312  717532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:45:34.321957  717532 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:45:34.336479  717532 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1213 13:45:34.336554  717532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:45:34.346148  717532 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 13:45:34.346207  717532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:45:34.355128  717532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:45:34.364214  717532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:45:34.372644  717532 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:45:34.380531  717532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:45:34.389095  717532 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:45:34.397219  717532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:45:34.405489  717532 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:45:34.412736  717532 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:45:34.419831  717532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:45:34.498411  717532 ssh_runner.go:195] Run: sudo systemctl restart crio
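The sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager and conmon cgroup shown in the log; a quick way to confirm the result after the restart (expected values in comments):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",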
	I1213 13:45:34.640491  717532 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:45:34.640564  717532 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:45:34.644579  717532 start.go:564] Will wait 60s for crictl version
	I1213 13:45:34.644635  717532 ssh_runner.go:195] Run: which crictl
	I1213 13:45:34.648076  717532 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 13:45:34.675686  717532 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 13:45:34.675832  717532 ssh_runner.go:195] Run: crio --version
	I1213 13:45:34.703910  717532 ssh_runner.go:195] Run: crio --version
	I1213 13:45:34.735099  717532 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1213 13:45:34.736315  717532 cli_runner.go:164] Run: docker network inspect old-k8s-version-417583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:45:34.753616  717532 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 13:45:34.757632  717532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:45:34.767598  717532 kubeadm.go:884] updating cluster {Name:old-k8s-version-417583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-417583 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:45:34.767712  717532 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1213 13:45:34.767763  717532 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:45:34.798859  717532 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:45:34.798882  717532 crio.go:433] Images already preloaded, skipping extraction
	I1213 13:45:34.798935  717532 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:45:34.825692  717532 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:45:34.825716  717532 cache_images.go:86] Images are preloaded, skipping loading
	I1213 13:45:34.825723  717532 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1213 13:45:34.825914  717532 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-417583 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-417583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
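The ExecStart override above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down; one way to confirm the effective unit on the node (a sketch):

	sudo systemctl daemon-reload     # pick up the new drop-in
	systemctl cat kubelet            # prints kubelet.service plus the 10-kubeadm.conf override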
	I1213 13:45:34.825988  717532 ssh_runner.go:195] Run: crio config
	I1213 13:45:34.875406  717532 cni.go:84] Creating CNI manager for ""
	I1213 13:45:34.875427  717532 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:45:34.875444  717532 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 13:45:34.875477  717532 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-417583 NodeName:old-k8s-version-417583 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:45:34.875647  717532 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-417583"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
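For comparison with the rendered config above, kubeadm can print its stock defaults for the same kinds; a sketch (run wherever kubeadm is on PATH; output will differ from minikube's tuned values):

	kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration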
	
	I1213 13:45:34.875748  717532 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1213 13:45:34.885383  717532 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:45:34.885440  717532 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:45:34.893904  717532 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1213 13:45:34.907866  717532 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 13:45:34.921486  717532 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1213 13:45:34.934565  717532 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 13:45:34.938288  717532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:45:34.948382  717532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:45:35.026882  717532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:45:35.060016  717532 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/old-k8s-version-417583 for IP: 192.168.76.2
	I1213 13:45:35.060040  717532 certs.go:195] generating shared ca certs ...
	I1213 13:45:35.060061  717532 certs.go:227] acquiring lock for ca certs: {Name:mkb6963f3134ffd486c672ddb3a967e56122d5d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:45:35.060232  717532 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key
	I1213 13:45:35.060294  717532 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key
	I1213 13:45:35.060308  717532 certs.go:257] generating profile certs ...
	I1213 13:45:35.060435  717532 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/old-k8s-version-417583/client.key
	I1213 13:45:35.060535  717532 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/old-k8s-version-417583/apiserver.key.0b1a8811
	I1213 13:45:35.060606  717532 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/old-k8s-version-417583/proxy-client.key
	I1213 13:45:35.060710  717532 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem (1338 bytes)
	W1213 13:45:35.060744  717532 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130_empty.pem, impossibly tiny 0 bytes
	I1213 13:45:35.060754  717532 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:45:35.060792  717532 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:45:35.060825  717532 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:45:35.060855  717532 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem (1679 bytes)
	I1213 13:45:35.060907  717532 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:45:35.061514  717532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:45:35.082489  717532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 13:45:35.101242  717532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:45:35.119082  717532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 13:45:35.138713  717532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/old-k8s-version-417583/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 13:45:35.164307  717532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/old-k8s-version-417583/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 13:45:35.181425  717532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/old-k8s-version-417583/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:45:35.198236  717532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/old-k8s-version-417583/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 13:45:35.214931  717532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:45:35.231288  717532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem --> /usr/share/ca-certificates/394130.pem (1338 bytes)
	I1213 13:45:35.248353  717532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /usr/share/ca-certificates/3941302.pem (1708 bytes)
	I1213 13:45:35.265968  717532 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:45:35.277928  717532 ssh_runner.go:195] Run: openssl version
	I1213 13:45:35.283843  717532 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:45:35.290994  717532 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:45:35.298572  717532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:45:35.302316  717532 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:45:35.302367  717532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:45:35.337547  717532 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 13:45:35.345720  717532 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/394130.pem
	I1213 13:45:35.353152  717532 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/394130.pem /etc/ssl/certs/394130.pem
	I1213 13:45:35.360605  717532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/394130.pem
	I1213 13:45:35.364313  717532 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/394130.pem
	I1213 13:45:35.364376  717532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/394130.pem
	I1213 13:45:35.399086  717532 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 13:45:35.406970  717532 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3941302.pem
	I1213 13:45:35.414145  717532 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3941302.pem /etc/ssl/certs/3941302.pem
	I1213 13:45:35.421758  717532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3941302.pem
	I1213 13:45:35.425389  717532 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/3941302.pem
	I1213 13:45:35.425437  717532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3941302.pem
	I1213 13:45:35.462710  717532 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 13:45:35.470227  717532 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:45:35.473808  717532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 13:45:35.507750  717532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 13:45:35.541793  717532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 13:45:35.581824  717532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 13:45:35.629952  717532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 13:45:35.689722  717532 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
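A minimal sketch of the same 24-hour expiry check as a loop; openssl's -checkend exits non-zero when the certificate expires within the given number of seconds:

	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	  sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c.crt" \
	    || echo "$c.crt expires within 24h"
	done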
	I1213 13:45:35.745364  717532 kubeadm.go:401] StartCluster: {Name:old-k8s-version-417583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-417583 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:45:35.745543  717532 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:45:35.745629  717532 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:45:35.782041  717532 cri.go:89] found id: "682fe66dfbdf3e1e235c5a788a0304e2256519646f7b610b234ee76910a815c4"
	I1213 13:45:35.782064  717532 cri.go:89] found id: "50199bb0f2355e999cd87d325a8063909be474aea9edf7a8e719fb56e8183d8d"
	I1213 13:45:35.782069  717532 cri.go:89] found id: "8da5fd67633a606e436724c6c76834926bff7b7f1601133a881869ee1a6ef0e1"
	I1213 13:45:35.782074  717532 cri.go:89] found id: "2f447f41ac211953c99934f154aa22a56bee7630e2c5ef5666482cf2393ce32c"
	I1213 13:45:35.782079  717532 cri.go:89] found id: ""
	I1213 13:45:35.782136  717532 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 13:45:35.796373  717532 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:45:35Z" level=error msg="open /run/runc: no such file or directory"
	I1213 13:45:35.796443  717532 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:45:35.805641  717532 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 13:45:35.805658  717532 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 13:45:35.805700  717532 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 13:45:35.814171  717532 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 13:45:35.815107  717532 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-417583" does not appear in /home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:45:35.815674  717532 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-390571/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-417583" cluster setting kubeconfig missing "old-k8s-version-417583" context setting]
	I1213 13:45:35.816527  717532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/kubeconfig: {Name:mke96882ff9199e558f67b9408c8f04265bde7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:45:35.818363  717532 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 13:45:35.827062  717532 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1213 13:45:35.827094  717532 kubeadm.go:602] duration metric: took 21.429487ms to restartPrimaryControlPlane
	I1213 13:45:35.827104  717532 kubeadm.go:403] duration metric: took 81.752696ms to StartCluster
	I1213 13:45:35.827121  717532 settings.go:142] acquiring lock: {Name:mkb44193ba58b09d8615650747eaad19c43e1a80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:45:35.827178  717532 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:45:35.829171  717532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/kubeconfig: {Name:mke96882ff9199e558f67b9408c8f04265bde7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:45:35.829419  717532 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:45:35.829631  717532 config.go:182] Loaded profile config "old-k8s-version-417583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1213 13:45:35.829685  717532 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 13:45:35.829770  717532 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-417583"
	I1213 13:45:35.829801  717532 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-417583"
	W1213 13:45:35.829809  717532 addons.go:248] addon storage-provisioner should already be in state true
	I1213 13:45:35.829834  717532 host.go:66] Checking if "old-k8s-version-417583" exists ...
	I1213 13:45:35.829948  717532 addons.go:70] Setting dashboard=true in profile "old-k8s-version-417583"
	I1213 13:45:35.829987  717532 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-417583"
	I1213 13:45:35.829988  717532 addons.go:239] Setting addon dashboard=true in "old-k8s-version-417583"
	W1213 13:45:35.830001  717532 addons.go:248] addon dashboard should already be in state true
	I1213 13:45:35.830004  717532 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-417583"
	I1213 13:45:35.830029  717532 host.go:66] Checking if "old-k8s-version-417583" exists ...
	I1213 13:45:35.830322  717532 cli_runner.go:164] Run: docker container inspect old-k8s-version-417583 --format={{.State.Status}}
	I1213 13:45:35.830337  717532 cli_runner.go:164] Run: docker container inspect old-k8s-version-417583 --format={{.State.Status}}
	I1213 13:45:35.830613  717532 cli_runner.go:164] Run: docker container inspect old-k8s-version-417583 --format={{.State.Status}}
	I1213 13:45:35.831359  717532 out.go:179] * Verifying Kubernetes components...
	I1213 13:45:35.832896  717532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:45:35.858925  717532 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-417583"
	W1213 13:45:35.859012  717532 addons.go:248] addon default-storageclass should already be in state true
	I1213 13:45:35.859056  717532 host.go:66] Checking if "old-k8s-version-417583" exists ...
	I1213 13:45:35.859676  717532 cli_runner.go:164] Run: docker container inspect old-k8s-version-417583 --format={{.State.Status}}
	I1213 13:45:35.860750  717532 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 13:45:35.862166  717532 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:45:35.862190  717532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 13:45:35.862245  717532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-417583
	I1213 13:45:35.864256  717532 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 13:45:35.865526  717532 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 13:45:33.867453  716415 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 13:45:34.185527  716415 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 13:45:34.185728  716415 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-038239 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1213 13:45:34.308175  716415 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 13:45:34.308397  716415 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-038239 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1213 13:45:35.103092  716415 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 13:45:35.686948  716415 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 13:45:36.046506  716415 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 13:45:36.049947  716415 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 13:45:36.219142  716415 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 13:45:36.460984  716415 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 13:45:36.706930  716415 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 13:45:37.348886  716415 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 13:45:37.456289  716415 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 13:45:37.456918  716415 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 13:45:37.461439  716415 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 13:45:37.462905  716415 out.go:252]   - Booting up control plane ...
	I1213 13:45:37.463030  716415 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 13:45:37.463144  716415 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 13:45:37.464189  716415 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 13:45:37.479922  716415 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 13:45:37.480047  716415 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 13:45:37.486968  716415 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 13:45:37.487320  716415 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 13:45:37.487387  716415 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 13:45:37.605061  716415 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 13:45:37.605207  716415 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 13:45:38.114249  716415 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 507.89298ms
	I1213 13:45:38.119704  716415 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 13:45:38.119840  716415 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8444/livez
	I1213 13:45:38.119972  716415 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 13:45:38.120125  716415 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 13:45:35.867073  717532 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 13:45:35.867093  717532 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 13:45:35.867145  717532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-417583
	I1213 13:45:35.905677  717532 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 13:45:35.905711  717532 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 13:45:35.905980  717532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-417583
	I1213 13:45:35.908925  717532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/old-k8s-version-417583/id_rsa Username:docker}
	I1213 13:45:35.910659  717532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/old-k8s-version-417583/id_rsa Username:docker}
	I1213 13:45:35.937304  717532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/old-k8s-version-417583/id_rsa Username:docker}
	I1213 13:45:36.010213  717532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:45:36.030962  717532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:45:36.031667  717532 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 13:45:36.031690  717532 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 13:45:36.036976  717532 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-417583" to be "Ready" ...
	I1213 13:45:36.055359  717532 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 13:45:36.055381  717532 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 13:45:36.060358  717532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 13:45:36.075091  717532 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 13:45:36.075113  717532 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 13:45:36.099298  717532 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 13:45:36.099319  717532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 13:45:36.119153  717532 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 13:45:36.119182  717532 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 13:45:36.143193  717532 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 13:45:36.143232  717532 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 13:45:36.168842  717532 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 13:45:36.168881  717532 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 13:45:36.188013  717532 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 13:45:36.188041  717532 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 13:45:36.205122  717532 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 13:45:36.205144  717532 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 13:45:36.226190  717532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 13:45:38.232348  717532 node_ready.go:49] node "old-k8s-version-417583" is "Ready"
	I1213 13:45:38.232388  717532 node_ready.go:38] duration metric: took 2.195374878s for node "old-k8s-version-417583" to be "Ready" ...
	I1213 13:45:38.232406  717532 api_server.go:52] waiting for apiserver process to appear ...
	I1213 13:45:38.232464  717532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:45:38.924333  717532 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.893330928s)
	I1213 13:45:38.924432  717532 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.864050514s)
	I1213 13:45:39.339661  717532 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.107172794s)
	I1213 13:45:39.339712  717532 api_server.go:72] duration metric: took 3.510262226s to wait for apiserver process to appear ...
	I1213 13:45:39.339720  717532 api_server.go:88] waiting for apiserver healthz status ...
	I1213 13:45:39.339743  717532 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:45:39.340256  717532 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.114008839s)
	I1213 13:45:39.342391  717532 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-417583 addons enable metrics-server
	
	I1213 13:45:39.343828  717532 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	
	
	==> CRI-O <==
	Dec 13 13:45:30 embed-certs-973953 crio[785]: time="2025-12-13T13:45:30.291213582Z" level=info msg="Starting container: 897be561b9000934cce059cdbb3e2c753f50beee31b7361a9aa4fb09527acdd6" id=24c67a5a-a77e-4aa9-9e45-93031fca9950 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:45:30 embed-certs-973953 crio[785]: time="2025-12-13T13:45:30.293348275Z" level=info msg="Started container" PID=1855 containerID=897be561b9000934cce059cdbb3e2c753f50beee31b7361a9aa4fb09527acdd6 description=kube-system/coredns-66bc5c9577-bl59n/coredns id=24c67a5a-a77e-4aa9-9e45-93031fca9950 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9458b1c16d4f3a1e7d4edfaf6510dca3753530ab77a059e42bab2bbd19e9f155
	Dec 13 13:45:32 embed-certs-973953 crio[785]: time="2025-12-13T13:45:32.933647774Z" level=info msg="Running pod sandbox: default/busybox/POD" id=0d0856eb-686d-4495-804d-4af4aa9a26d7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:45:32 embed-certs-973953 crio[785]: time="2025-12-13T13:45:32.933718712Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:45:32 embed-certs-973953 crio[785]: time="2025-12-13T13:45:32.938593747Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0783cf88f7a07b01ec491921d3525b7a77a7f37cd103c502dd2ef53b0e90ab59 UID:c0f02bd5-9f45-405f-81f1-b1df3e55d90d NetNS:/var/run/netns/6d6e55e9-3f6b-45f2-a751-e6a801926970 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000382638}] Aliases:map[]}"
	Dec 13 13:45:32 embed-certs-973953 crio[785]: time="2025-12-13T13:45:32.938622587Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 13 13:45:32 embed-certs-973953 crio[785]: time="2025-12-13T13:45:32.948235085Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0783cf88f7a07b01ec491921d3525b7a77a7f37cd103c502dd2ef53b0e90ab59 UID:c0f02bd5-9f45-405f-81f1-b1df3e55d90d NetNS:/var/run/netns/6d6e55e9-3f6b-45f2-a751-e6a801926970 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000382638}] Aliases:map[]}"
	Dec 13 13:45:32 embed-certs-973953 crio[785]: time="2025-12-13T13:45:32.948347063Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 13 13:45:32 embed-certs-973953 crio[785]: time="2025-12-13T13:45:32.949102749Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 13:45:32 embed-certs-973953 crio[785]: time="2025-12-13T13:45:32.949873536Z" level=info msg="Ran pod sandbox 0783cf88f7a07b01ec491921d3525b7a77a7f37cd103c502dd2ef53b0e90ab59 with infra container: default/busybox/POD" id=0d0856eb-686d-4495-804d-4af4aa9a26d7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:45:32 embed-certs-973953 crio[785]: time="2025-12-13T13:45:32.951052765Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1e88e1a8-ae06-42bb-ad2b-b8544848a865 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:45:32 embed-certs-973953 crio[785]: time="2025-12-13T13:45:32.951192805Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=1e88e1a8-ae06-42bb-ad2b-b8544848a865 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:45:32 embed-certs-973953 crio[785]: time="2025-12-13T13:45:32.951230583Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=1e88e1a8-ae06-42bb-ad2b-b8544848a865 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:45:32 embed-certs-973953 crio[785]: time="2025-12-13T13:45:32.951996794Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=56e1f6a4-09c7-410c-afcc-e4e9ff72095b name=/runtime.v1.ImageService/PullImage
	Dec 13 13:45:32 embed-certs-973953 crio[785]: time="2025-12-13T13:45:32.954853987Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 13 13:45:33 embed-certs-973953 crio[785]: time="2025-12-13T13:45:33.57105773Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=56e1f6a4-09c7-410c-afcc-e4e9ff72095b name=/runtime.v1.ImageService/PullImage
	Dec 13 13:45:33 embed-certs-973953 crio[785]: time="2025-12-13T13:45:33.571824736Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c4419abd-a2e9-47c9-bfb8-8aabf788f748 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:45:33 embed-certs-973953 crio[785]: time="2025-12-13T13:45:33.573118733Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8d5badcd-3c93-4ee3-9519-ba93f610018f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:45:33 embed-certs-973953 crio[785]: time="2025-12-13T13:45:33.576300672Z" level=info msg="Creating container: default/busybox/busybox" id=7fa3d0fc-9872-4a5c-8fcb-9164949a8f68 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:45:33 embed-certs-973953 crio[785]: time="2025-12-13T13:45:33.576469578Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:45:33 embed-certs-973953 crio[785]: time="2025-12-13T13:45:33.581637209Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:45:33 embed-certs-973953 crio[785]: time="2025-12-13T13:45:33.582083199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:45:33 embed-certs-973953 crio[785]: time="2025-12-13T13:45:33.606668729Z" level=info msg="Created container b783aba09b51fb024a53613d698b240b1f4a34339467e66140a9dbe8f6303909: default/busybox/busybox" id=7fa3d0fc-9872-4a5c-8fcb-9164949a8f68 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:45:33 embed-certs-973953 crio[785]: time="2025-12-13T13:45:33.607167594Z" level=info msg="Starting container: b783aba09b51fb024a53613d698b240b1f4a34339467e66140a9dbe8f6303909" id=02616c0d-4eda-4630-8d26-5865945b7fb6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:45:33 embed-certs-973953 crio[785]: time="2025-12-13T13:45:33.609001254Z" level=info msg="Started container" PID=1932 containerID=b783aba09b51fb024a53613d698b240b1f4a34339467e66140a9dbe8f6303909 description=default/busybox/busybox id=02616c0d-4eda-4630-8d26-5865945b7fb6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0783cf88f7a07b01ec491921d3525b7a77a7f37cd103c502dd2ef53b0e90ab59
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	b783aba09b51f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   0783cf88f7a07       busybox                                      default
	897be561b9000       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 seconds ago      Running             coredns                   0                   9458b1c16d4f3       coredns-66bc5c9577-bl59n                     kube-system
	cad06630c539b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 seconds ago      Running             storage-provisioner       0                   29f9c4c62f5d5       storage-provisioner                          kube-system
	aa38821132db6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      22 seconds ago      Running             kindnet-cni               0                   8bfbcfeee7a82       kindnet-bw5d4                                kube-system
	03be25f5e72e6       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      22 seconds ago      Running             kube-proxy                0                   91ec9e84e8009       kube-proxy-jqcpv                             kube-system
	cb013390c74d9       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      32 seconds ago      Running             kube-controller-manager   0                   40b814bc748da       kube-controller-manager-embed-certs-973953   kube-system
	313b0c7ea97c3       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      32 seconds ago      Running             etcd                      0                   a51c20c1110d3       etcd-embed-certs-973953                      kube-system
	eaff923689b7a       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      32 seconds ago      Running             kube-scheduler            0                   33a1854e70752       kube-scheduler-embed-certs-973953            kube-system
	a2a46d7379943       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      32 seconds ago      Running             kube-apiserver            0                   109abe0d6c9fe       kube-apiserver-embed-certs-973953            kube-system
	
	
	==> coredns [897be561b9000934cce059cdbb3e2c753f50beee31b7361a9aa4fb09527acdd6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58636 - 29753 "HINFO IN 6679013509748154486.5675413234454777722. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069856479s
	
	
	==> describe nodes <==
	Name:               embed-certs-973953
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-973953
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=embed-certs-973953
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_45_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:45:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-973953
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:45:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:45:29 +0000   Sat, 13 Dec 2025 13:45:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:45:29 +0000   Sat, 13 Dec 2025 13:45:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:45:29 +0000   Sat, 13 Dec 2025 13:45:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 13:45:29 +0000   Sat, 13 Dec 2025 13:45:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-973953
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                03ac64dc-35d6-4a73-b891-f77762e89392
	  Boot ID:                    3a031c38-2de5-4abf-9191-ca3cf8c37af1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-bl59n                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     22s
	  kube-system                 etcd-embed-certs-973953                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-bw5d4                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-embed-certs-973953             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-embed-certs-973953    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-jqcpv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-embed-certs-973953             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21s                kube-proxy       
	  Normal  NodeHasSufficientMemory  33s (x8 over 33s)  kubelet          Node embed-certs-973953 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s (x8 over 33s)  kubelet          Node embed-certs-973953 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s (x8 over 33s)  kubelet          Node embed-certs-973953 status is now: NodeHasSufficientPID
	  Normal  Starting                 28s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s                kubelet          Node embed-certs-973953 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s                kubelet          Node embed-certs-973953 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s                kubelet          Node embed-certs-973953 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s                node-controller  Node embed-certs-973953 event: Registered Node embed-certs-973953 in Controller
	  Normal  NodeReady                12s                kubelet          Node embed-certs-973953 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 d4 5a 35 c7 c3 08 06
	[  +0.021086] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +19.681588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 0c 97 18 9b e3 08 06
	[  +0.000314] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 04 61 d2 c8 ed 08 06
	[Dec13 13:44] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[  +7.252347] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 ce fd 58 59 0f 08 06
	[  +0.000117] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	[  +1.567410] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 59 b8 80 29 4a 08 06
	[  +0.000370] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +13.814205] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 cb 6b 87 5d af 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[Dec13 13:45] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8e 49 cc d7 b3 9c 08 06
	[  +0.000851] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	
	
	==> etcd [313b0c7ea97c3a417638c16879b346e45c0f1ce2e4e1241376f2546c8f13663b] <==
	{"level":"warn","ts":"2025-12-13T13:45:10.161293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.171934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.179677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.186357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.195330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.203082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.210335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.218447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.227212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.236920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.243874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.251111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.258153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.266139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.274165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.282982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.291299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.300077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.309931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.317981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.332024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.336024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.343961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.350748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:10.410854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38878","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:45:41 up  2:28,  0 user,  load average: 6.81, 4.19, 2.58
	Linux embed-certs-973953 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [aa38821132db61de63e42be9359d8326cac10571b523e66821a29122d9c37410] <==
	I1213 13:45:19.412414       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 13:45:19.412787       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1213 13:45:19.476499       1 main.go:148] setting mtu 1500 for CNI 
	I1213 13:45:19.476555       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 13:45:19.476588       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T13:45:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 13:45:19.714771       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 13:45:19.714887       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 13:45:19.714900       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 13:45:19.715046       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 13:45:20.015846       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 13:45:20.015870       1 metrics.go:72] Registering metrics
	I1213 13:45:20.015916       1 controller.go:711] "Syncing nftables rules"
	I1213 13:45:29.715933       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 13:45:29.715987       1 main.go:301] handling current node
	I1213 13:45:39.720369       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 13:45:39.720413       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a2a46d737994306f31718baf5c72e9beb5e17fd6d316a6cd57f1072f60aa8b4f] <==
	E1213 13:45:11.020375       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1213 13:45:11.060863       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 13:45:11.063302       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1213 13:45:11.063333       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:45:11.072841       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:45:11.073488       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 13:45:11.172162       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 13:45:11.869992       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1213 13:45:11.882674       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1213 13:45:11.882695       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 13:45:12.388722       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 13:45:12.441687       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 13:45:12.566702       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1213 13:45:12.572655       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1213 13:45:12.573711       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 13:45:12.578386       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:45:12.898639       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 13:45:13.715283       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 13:45:13.725510       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1213 13:45:13.733404       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 13:45:18.704325       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:45:18.708591       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:45:18.753226       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 13:45:18.801086       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1213 13:45:39.753532       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:34928: use of closed network connection
	
	
	==> kube-controller-manager [cb013390c74d94933c01782b2623ff3993820d8ba6bcad78db1ab5511587f1da] <==
	I1213 13:45:17.855164       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 13:45:17.868327       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1213 13:45:17.889644       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 13:45:17.895968       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 13:45:17.897357       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 13:45:17.897440       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 13:45:17.897463       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 13:45:17.897479       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 13:45:17.897771       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 13:45:17.897803       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 13:45:17.897956       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1213 13:45:17.898836       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 13:45:17.898932       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 13:45:17.898986       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 13:45:17.898998       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 13:45:17.899028       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 13:45:17.899307       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 13:45:17.899524       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 13:45:17.899581       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 13:45:17.899739       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 13:45:17.901675       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 13:45:17.902864       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 13:45:17.902910       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 13:45:17.926728       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 13:45:32.830697       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [03be25f5e72e68cc07ec21d4c268f5e0d2d75cd4865098aeb2e8b29b36acb6bd] <==
	I1213 13:45:19.221703       1 server_linux.go:53] "Using iptables proxy"
	I1213 13:45:19.293619       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 13:45:19.394856       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 13:45:19.394903       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1213 13:45:19.395022       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:45:19.422716       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:45:19.422857       1 server_linux.go:132] "Using iptables Proxier"
	I1213 13:45:19.445895       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:45:19.448919       1 server.go:527] "Version info" version="v1.34.2"
	I1213 13:45:19.449315       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:45:19.452192       1 config.go:200] "Starting service config controller"
	I1213 13:45:19.454507       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:45:19.452566       1 config.go:309] "Starting node config controller"
	I1213 13:45:19.454563       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:45:19.454571       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:45:19.452654       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:45:19.454601       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:45:19.452633       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:45:19.454614       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:45:19.556324       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 13:45:19.556365       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 13:45:19.556395       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [eaff923689b7a4af3cef1842aaaa1075492a24698d44d6138c2fac5e42641b4a] <==
	E1213 13:45:10.931072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 13:45:10.931125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 13:45:10.931131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 13:45:10.931206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 13:45:10.931266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 13:45:10.931276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 13:45:10.931347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 13:45:10.931441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 13:45:10.931454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 13:45:10.931465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 13:45:10.931610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 13:45:10.931753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 13:45:11.738027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 13:45:11.745813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 13:45:11.791405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 13:45:11.865313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 13:45:11.898156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 13:45:11.906622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 13:45:11.931927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 13:45:11.973122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 13:45:12.044621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 13:45:12.094857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 13:45:12.110009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 13:45:12.145704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1213 13:45:14.626590       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 13:45:14 embed-certs-973953 kubelet[1322]: I1213 13:45:14.613530    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-973953" podStartSLOduration=1.6134922619999998 podStartE2EDuration="1.613492262s" podCreationTimestamp="2025-12-13 13:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:45:14.613472842 +0000 UTC m=+1.142802241" watchObservedRunningTime="2025-12-13 13:45:14.613492262 +0000 UTC m=+1.142821683"
	Dec 13 13:45:14 embed-certs-973953 kubelet[1322]: I1213 13:45:14.626014    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-973953" podStartSLOduration=1.6259931 podStartE2EDuration="1.6259931s" podCreationTimestamp="2025-12-13 13:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:45:14.625959264 +0000 UTC m=+1.155288662" watchObservedRunningTime="2025-12-13 13:45:14.6259931 +0000 UTC m=+1.155322498"
	Dec 13 13:45:14 embed-certs-973953 kubelet[1322]: I1213 13:45:14.663385    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-973953" podStartSLOduration=1.663360891 podStartE2EDuration="1.663360891s" podCreationTimestamp="2025-12-13 13:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:45:14.635860709 +0000 UTC m=+1.165190125" watchObservedRunningTime="2025-12-13 13:45:14.663360891 +0000 UTC m=+1.192690288"
	Dec 13 13:45:14 embed-certs-973953 kubelet[1322]: I1213 13:45:14.663553    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-973953" podStartSLOduration=1.6635421369999999 podStartE2EDuration="1.663542137s" podCreationTimestamp="2025-12-13 13:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:45:14.662398258 +0000 UTC m=+1.191727636" watchObservedRunningTime="2025-12-13 13:45:14.663542137 +0000 UTC m=+1.192871536"
	Dec 13 13:45:17 embed-certs-973953 kubelet[1322]: I1213 13:45:17.892843    1322 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 13 13:45:17 embed-certs-973953 kubelet[1322]: I1213 13:45:17.893595    1322 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 13 13:45:18 embed-certs-973953 kubelet[1322]: I1213 13:45:18.896059    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rxg4\" (UniqueName: \"kubernetes.io/projected/71bc6690-d0e7-4a63-a26e-fcee1b63c294-kube-api-access-7rxg4\") pod \"kindnet-bw5d4\" (UID: \"71bc6690-d0e7-4a63-a26e-fcee1b63c294\") " pod="kube-system/kindnet-bw5d4"
	Dec 13 13:45:18 embed-certs-973953 kubelet[1322]: I1213 13:45:18.896139    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/24f1d48c-0a0c-44ad-b091-b6cd3a472231-kube-proxy\") pod \"kube-proxy-jqcpv\" (UID: \"24f1d48c-0a0c-44ad-b091-b6cd3a472231\") " pod="kube-system/kube-proxy-jqcpv"
	Dec 13 13:45:18 embed-certs-973953 kubelet[1322]: I1213 13:45:18.896183    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24f1d48c-0a0c-44ad-b091-b6cd3a472231-xtables-lock\") pod \"kube-proxy-jqcpv\" (UID: \"24f1d48c-0a0c-44ad-b091-b6cd3a472231\") " pod="kube-system/kube-proxy-jqcpv"
	Dec 13 13:45:18 embed-certs-973953 kubelet[1322]: I1213 13:45:18.896205    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24f1d48c-0a0c-44ad-b091-b6cd3a472231-lib-modules\") pod \"kube-proxy-jqcpv\" (UID: \"24f1d48c-0a0c-44ad-b091-b6cd3a472231\") " pod="kube-system/kube-proxy-jqcpv"
	Dec 13 13:45:18 embed-certs-973953 kubelet[1322]: I1213 13:45:18.896228    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rskw4\" (UniqueName: \"kubernetes.io/projected/24f1d48c-0a0c-44ad-b091-b6cd3a472231-kube-api-access-rskw4\") pod \"kube-proxy-jqcpv\" (UID: \"24f1d48c-0a0c-44ad-b091-b6cd3a472231\") " pod="kube-system/kube-proxy-jqcpv"
	Dec 13 13:45:18 embed-certs-973953 kubelet[1322]: I1213 13:45:18.896255    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/71bc6690-d0e7-4a63-a26e-fcee1b63c294-cni-cfg\") pod \"kindnet-bw5d4\" (UID: \"71bc6690-d0e7-4a63-a26e-fcee1b63c294\") " pod="kube-system/kindnet-bw5d4"
	Dec 13 13:45:18 embed-certs-973953 kubelet[1322]: I1213 13:45:18.896307    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71bc6690-d0e7-4a63-a26e-fcee1b63c294-xtables-lock\") pod \"kindnet-bw5d4\" (UID: \"71bc6690-d0e7-4a63-a26e-fcee1b63c294\") " pod="kube-system/kindnet-bw5d4"
	Dec 13 13:45:18 embed-certs-973953 kubelet[1322]: I1213 13:45:18.896355    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71bc6690-d0e7-4a63-a26e-fcee1b63c294-lib-modules\") pod \"kindnet-bw5d4\" (UID: \"71bc6690-d0e7-4a63-a26e-fcee1b63c294\") " pod="kube-system/kindnet-bw5d4"
	Dec 13 13:45:19 embed-certs-973953 kubelet[1322]: I1213 13:45:19.632408    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jqcpv" podStartSLOduration=1.632383222 podStartE2EDuration="1.632383222s" podCreationTimestamp="2025-12-13 13:45:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:45:19.632261962 +0000 UTC m=+6.161591359" watchObservedRunningTime="2025-12-13 13:45:19.632383222 +0000 UTC m=+6.161712621"
	Dec 13 13:45:19 embed-certs-973953 kubelet[1322]: I1213 13:45:19.651107    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-bw5d4" podStartSLOduration=1.651084233 podStartE2EDuration="1.651084233s" podCreationTimestamp="2025-12-13 13:45:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:45:19.649655606 +0000 UTC m=+6.178985032" watchObservedRunningTime="2025-12-13 13:45:19.651084233 +0000 UTC m=+6.180413631"
	Dec 13 13:45:29 embed-certs-973953 kubelet[1322]: I1213 13:45:29.907368    1322 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 13 13:45:29 embed-certs-973953 kubelet[1322]: I1213 13:45:29.979498    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25nks\" (UniqueName: \"kubernetes.io/projected/17970a2f-3f70-4f46-9d99-c7c806730329-kube-api-access-25nks\") pod \"storage-provisioner\" (UID: \"17970a2f-3f70-4f46-9d99-c7c806730329\") " pod="kube-system/storage-provisioner"
	Dec 13 13:45:29 embed-certs-973953 kubelet[1322]: I1213 13:45:29.979558    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/17970a2f-3f70-4f46-9d99-c7c806730329-tmp\") pod \"storage-provisioner\" (UID: \"17970a2f-3f70-4f46-9d99-c7c806730329\") " pod="kube-system/storage-provisioner"
	Dec 13 13:45:29 embed-certs-973953 kubelet[1322]: I1213 13:45:29.979575    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w64rp\" (UniqueName: \"kubernetes.io/projected/b6e3ac25-b7ec-49c7-b8a1-b37a1adcdd5e-kube-api-access-w64rp\") pod \"coredns-66bc5c9577-bl59n\" (UID: \"b6e3ac25-b7ec-49c7-b8a1-b37a1adcdd5e\") " pod="kube-system/coredns-66bc5c9577-bl59n"
	Dec 13 13:45:29 embed-certs-973953 kubelet[1322]: I1213 13:45:29.979592    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6e3ac25-b7ec-49c7-b8a1-b37a1adcdd5e-config-volume\") pod \"coredns-66bc5c9577-bl59n\" (UID: \"b6e3ac25-b7ec-49c7-b8a1-b37a1adcdd5e\") " pod="kube-system/coredns-66bc5c9577-bl59n"
	Dec 13 13:45:30 embed-certs-973953 kubelet[1322]: I1213 13:45:30.686329    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bl59n" podStartSLOduration=11.686304901 podStartE2EDuration="11.686304901s" podCreationTimestamp="2025-12-13 13:45:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:45:30.659925561 +0000 UTC m=+17.189254941" watchObservedRunningTime="2025-12-13 13:45:30.686304901 +0000 UTC m=+17.215634298"
	Dec 13 13:45:30 embed-certs-973953 kubelet[1322]: I1213 13:45:30.702545    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.702517914 podStartE2EDuration="11.702517914s" podCreationTimestamp="2025-12-13 13:45:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:45:30.687245514 +0000 UTC m=+17.216574912" watchObservedRunningTime="2025-12-13 13:45:30.702517914 +0000 UTC m=+17.231847312"
	Dec 13 13:45:32 embed-certs-973953 kubelet[1322]: I1213 13:45:32.700050    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql2m2\" (UniqueName: \"kubernetes.io/projected/c0f02bd5-9f45-405f-81f1-b1df3e55d90d-kube-api-access-ql2m2\") pod \"busybox\" (UID: \"c0f02bd5-9f45-405f-81f1-b1df3e55d90d\") " pod="default/busybox"
	Dec 13 13:45:33 embed-certs-973953 kubelet[1322]: I1213 13:45:33.659389    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.03829289 podStartE2EDuration="1.659368741s" podCreationTimestamp="2025-12-13 13:45:32 +0000 UTC" firstStartedPulling="2025-12-13 13:45:32.951520808 +0000 UTC m=+19.480850188" lastFinishedPulling="2025-12-13 13:45:33.572596658 +0000 UTC m=+20.101926039" observedRunningTime="2025-12-13 13:45:33.659224471 +0000 UTC m=+20.188553870" watchObservedRunningTime="2025-12-13 13:45:33.659368741 +0000 UTC m=+20.188698140"
	
	
	==> storage-provisioner [cad06630c539bbebaaaf22ad80fdbc9283c7e907db8f51971569500cfb367e81] <==
	I1213 13:45:30.299348       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 13:45:30.311987       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 13:45:30.312069       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 13:45:30.314834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:30.322156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 13:45:30.322356       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 13:45:30.322933       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-973953_6eb55064-6930-49f7-af90-be91fe400b37!
	I1213 13:45:30.323465       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5f4fe34c-f4c1-4423-bf81-d96ad4a8dd1c", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-973953_6eb55064-6930-49f7-af90-be91fe400b37 became leader
	W1213 13:45:30.326296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:30.331808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 13:45:30.423219       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-973953_6eb55064-6930-49f7-af90-be91fe400b37!
	W1213 13:45:32.334703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:32.338698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:34.342081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:34.345801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:36.348592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:36.353238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:38.356833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:38.361128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:40.365006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:45:40.368723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
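	Note on the repeated "v1 Endpoints is deprecated" warnings in the storage-provisioner log above: as the LeaderElection event it emits shows, the provisioner takes its leader lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), which the API server now flags in favour of discovery.k8s.io/v1 EndpointSlice. A quick way to look at that lock by hand (a sketch only; the leader annotation name is the usual endpoints-lock convention and is an assumption, not something printed in this log):

	    kubectl --context embed-certs-973953 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	    # the current holder is recorded in the control-plane.alpha.kubernetes.io/leader annotation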
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-973953 -n embed-certs-973953
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-973953 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.31s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-038239 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-038239 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (279.388436ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:46:11Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
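	This is the same root cause as the embed-certs failure above: exit status 11 is minikube's MK_ADDON_ENABLE_PAUSED path, which first checks whether the cluster is paused by running `sudo runc list -f json` on the node, and that probe fails because /run/runc does not exist on this crio node. A minimal way to reproduce the check by hand (profile name taken from this test; the crictl line is an assumed alternative probe, not what minikube itself runs):

	    # open a shell on the kic node created by the docker driver
	    out/minikube-linux-amd64 ssh -p default-k8s-diff-port-038239
	    # the probe minikube runs for its paused check; here it exits 1 with
	    # "open /run/runc: no such file or directory"
	    sudo runc list -f json
	    # assumed alternative: ask the CRI runtime directly which containers are running
	    sudo crictl ps --state Running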
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-038239 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-038239 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-038239 describe deploy/metrics-server -n kube-system: exit status 1 (64.932238ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-038239 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
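	(The missing-image assertion above is checked against the deployment spec; once metrics-server actually deploys, a one-liner such as the following, offered only as an illustrative alternative to the `kubectl describe` the test uses, prints the image string the test expects to contain fake.domain/registry.k8s.io/echoserver:1.4:)

	    kubectl --context default-k8s-diff-port-038239 -n kube-system get deploy metrics-server \
	      -o jsonpath='{.spec.template.spec.containers[0].image}'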
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-038239
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-038239:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "284f8c641cab5a0fbe10d63636e1daa6c38652a3d8e4ed0d0d00ddebf73de3da",
	        "Created": "2025-12-13T13:45:28.121473239Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 717105,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:45:28.15686556Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/284f8c641cab5a0fbe10d63636e1daa6c38652a3d8e4ed0d0d00ddebf73de3da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/284f8c641cab5a0fbe10d63636e1daa6c38652a3d8e4ed0d0d00ddebf73de3da/hostname",
	        "HostsPath": "/var/lib/docker/containers/284f8c641cab5a0fbe10d63636e1daa6c38652a3d8e4ed0d0d00ddebf73de3da/hosts",
	        "LogPath": "/var/lib/docker/containers/284f8c641cab5a0fbe10d63636e1daa6c38652a3d8e4ed0d0d00ddebf73de3da/284f8c641cab5a0fbe10d63636e1daa6c38652a3d8e4ed0d0d00ddebf73de3da-json.log",
	        "Name": "/default-k8s-diff-port-038239",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-038239:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-038239",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "284f8c641cab5a0fbe10d63636e1daa6c38652a3d8e4ed0d0d00ddebf73de3da",
	                "LowerDir": "/var/lib/docker/overlay2/60f326094624426fad6e6847f8117422b0fa3770373cf2b7510f46843322aed1-init/diff:/var/lib/docker/overlay2/2ab30f867418f233812f5ff754587aaeab7569a5579dc6a5c99873a35cf81eb6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/60f326094624426fad6e6847f8117422b0fa3770373cf2b7510f46843322aed1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/60f326094624426fad6e6847f8117422b0fa3770373cf2b7510f46843322aed1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/60f326094624426fad6e6847f8117422b0fa3770373cf2b7510f46843322aed1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-038239",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-038239/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-038239",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-038239",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-038239",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "2f210604d4fc76a72257c54d944a46db60abf69a570f8b1aea0b43e2b6deba09",
	            "SandboxKey": "/var/run/docker/netns/2f210604d4fc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33487"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33489"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33492"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33490"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33491"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-038239": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "251672d224afb460f1a8362b4545aae5d977bdecd5cdddf5909169b2b5623ddc",
	                    "EndpointID": "fd7a581b1fc3aa306f7c947f08a3dfb5cc40566b7f2d9e8d47c4804bdf20ba29",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "6a:49:16:26:19:34",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-038239",
	                        "284f8c641cab"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-038239 -n default-k8s-diff-port-038239
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-038239 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-038239 logs -n 25: (1.048505522s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-884214 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ ssh     │ -p bridge-884214 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo containerd config dump                                                                                                                                                                                                  │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo crio config                                                                                                                                                                                                             │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ delete  │ -p bridge-884214                                                                                                                                                                                                                              │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ delete  │ -p disable-driver-mounts-031848                                                                                                                                                                                                               │ disable-driver-mounts-031848 │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p default-k8s-diff-port-038239 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ addons  │ enable metrics-server -p no-preload-992258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-417583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p old-k8s-version-417583 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ stop    │ -p no-preload-992258 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ addons  │ enable metrics-server -p embed-certs-973953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ stop    │ -p embed-certs-973953 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ addons  │ enable dashboard -p no-preload-992258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p no-preload-992258 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-973953 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p embed-certs-973953 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-038239 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:46:00
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:46:00.986143  726383 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:46:00.986433  726383 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:46:00.986444  726383 out.go:374] Setting ErrFile to fd 2...
	I1213 13:46:00.986450  726383 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:46:00.986712  726383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:46:00.987300  726383 out.go:368] Setting JSON to false
	I1213 13:46:00.988756  726383 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8909,"bootTime":1765624652,"procs":408,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:46:00.988860  726383 start.go:143] virtualization: kvm guest
	I1213 13:46:00.991094  726383 out.go:179] * [embed-certs-973953] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:46:00.992672  726383 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:46:00.992673  726383 notify.go:221] Checking for updates...
	I1213 13:46:00.995236  726383 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:46:00.997926  726383 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:46:01.002106  726383 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:46:01.003629  726383 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:46:01.004748  726383 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:46:01.006679  726383 config.go:182] Loaded profile config "embed-certs-973953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:46:01.007539  726383 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:46:01.032115  726383 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:46:01.032203  726383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:46:01.089835  726383 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-13 13:46:01.079652712 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:46:01.090014  726383 docker.go:319] overlay module found
	I1213 13:46:01.092060  726383 out.go:179] * Using the docker driver based on existing profile
	I1213 13:46:01.093198  726383 start.go:309] selected driver: docker
	I1213 13:46:01.093225  726383 start.go:927] validating driver "docker" against &{Name:embed-certs-973953 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-973953 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:46:01.093348  726383 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:46:01.093961  726383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:46:01.149570  726383 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-13 13:46:01.13928888 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:46:01.149881  726383 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:46:01.149911  726383 cni.go:84] Creating CNI manager for ""
	I1213 13:46:01.149970  726383 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:46:01.150003  726383 start.go:353] cluster config:
	{Name:embed-certs-973953 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-973953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:46:01.152474  726383 out.go:179] * Starting "embed-certs-973953" primary control-plane node in "embed-certs-973953" cluster
	I1213 13:46:01.153638  726383 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 13:46:01.154830  726383 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 13:46:01.156008  726383 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:46:01.156041  726383 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 13:46:01.156051  726383 cache.go:65] Caching tarball of preloaded images
	I1213 13:46:01.156122  726383 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 13:46:01.156170  726383 preload.go:238] Found /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 13:46:01.156186  726383 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 13:46:01.156311  726383 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/config.json ...
	I1213 13:46:01.178101  726383 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 13:46:01.178129  726383 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 13:46:01.178147  726383 cache.go:243] Successfully downloaded all kic artifacts
	I1213 13:46:01.178214  726383 start.go:360] acquireMachinesLock for embed-certs-973953: {Name:mk9bf136673a37f733c3ece23bc4966d2c2ebc12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:46:01.178286  726383 start.go:364] duration metric: took 47.92µs to acquireMachinesLock for "embed-certs-973953"
	I1213 13:46:01.178305  726383 start.go:96] Skipping create...Using existing machine configuration
	I1213 13:46:01.178310  726383 fix.go:54] fixHost starting: 
	I1213 13:46:01.178578  726383 cli_runner.go:164] Run: docker container inspect embed-certs-973953 --format={{.State.Status}}
	I1213 13:46:01.198322  726383 fix.go:112] recreateIfNeeded on embed-certs-973953: state=Stopped err=<nil>
	W1213 13:46:01.198348  726383 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 13:45:57.557707  723278 addons.go:530] duration metric: took 2.645995764s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1213 13:45:58.049376  723278 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 13:45:58.056079  723278 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1213 13:45:58.057068  723278 api_server.go:141] control plane version: v1.35.0-beta.0
	I1213 13:45:58.057097  723278 api_server.go:131] duration metric: took 508.227585ms to wait for apiserver health ...
	I1213 13:45:58.057108  723278 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 13:45:58.060520  723278 system_pods.go:59] 8 kube-system pods found
	I1213 13:45:58.060548  723278 system_pods.go:61] "coredns-7d764666f9-qfkgp" [e45b3622-b224-4b7b-9c34-944ef33db069] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:45:58.060557  723278 system_pods.go:61] "etcd-no-preload-992258" [634b3fb0-ec16-4fbc-9cd5-db533c5a6db3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 13:45:58.060567  723278 system_pods.go:61] "kindnet-2n8ks" [5ad48c43-8809-454c-b27a-3314da93c63d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1213 13:45:58.060583  723278 system_pods.go:61] "kube-apiserver-no-preload-992258" [af9b5b62-7205-4f43-97d5-20772e622e37] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 13:45:58.060595  723278 system_pods.go:61] "kube-controller-manager-no-preload-992258" [3f72c682-2a32-42df-ba1e-ee1d715b84fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 13:45:58.060603  723278 system_pods.go:61] "kube-proxy-sjrzk" [994b46c4-47b5-4866-b1d3-8e6f0fde5c93] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 13:45:58.060609  723278 system_pods.go:61] "kube-scheduler-no-preload-992258" [ba280048-8def-4a23-9c88-265c69d1bb9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 13:45:58.060615  723278 system_pods.go:61] "storage-provisioner" [874ca672-a250-424b-94e2-6bdf29132823] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:45:58.060620  723278 system_pods.go:74] duration metric: took 3.506298ms to wait for pod list to return data ...
	I1213 13:45:58.060629  723278 default_sa.go:34] waiting for default service account to be created ...
	I1213 13:45:58.062755  723278 default_sa.go:45] found service account: "default"
	I1213 13:45:58.062787  723278 default_sa.go:55] duration metric: took 2.138834ms for default service account to be created ...
	I1213 13:45:58.062798  723278 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 13:45:58.065157  723278 system_pods.go:86] 8 kube-system pods found
	I1213 13:45:58.065182  723278 system_pods.go:89] "coredns-7d764666f9-qfkgp" [e45b3622-b224-4b7b-9c34-944ef33db069] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:45:58.065189  723278 system_pods.go:89] "etcd-no-preload-992258" [634b3fb0-ec16-4fbc-9cd5-db533c5a6db3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 13:45:58.065196  723278 system_pods.go:89] "kindnet-2n8ks" [5ad48c43-8809-454c-b27a-3314da93c63d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1213 13:45:58.065211  723278 system_pods.go:89] "kube-apiserver-no-preload-992258" [af9b5b62-7205-4f43-97d5-20772e622e37] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 13:45:58.065226  723278 system_pods.go:89] "kube-controller-manager-no-preload-992258" [3f72c682-2a32-42df-ba1e-ee1d715b84fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 13:45:58.065235  723278 system_pods.go:89] "kube-proxy-sjrzk" [994b46c4-47b5-4866-b1d3-8e6f0fde5c93] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 13:45:58.065241  723278 system_pods.go:89] "kube-scheduler-no-preload-992258" [ba280048-8def-4a23-9c88-265c69d1bb9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 13:45:58.065249  723278 system_pods.go:89] "storage-provisioner" [874ca672-a250-424b-94e2-6bdf29132823] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:45:58.065256  723278 system_pods.go:126] duration metric: took 2.452579ms to wait for k8s-apps to be running ...
	I1213 13:45:58.065264  723278 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 13:45:58.065305  723278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:45:58.078108  723278 system_svc.go:56] duration metric: took 12.83579ms WaitForService to wait for kubelet
	I1213 13:45:58.078131  723278 kubeadm.go:587] duration metric: took 3.166769013s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:45:58.078154  723278 node_conditions.go:102] verifying NodePressure condition ...
	I1213 13:45:58.080477  723278 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 13:45:58.080499  723278 node_conditions.go:123] node cpu capacity is 8
	I1213 13:45:58.080515  723278 node_conditions.go:105] duration metric: took 2.355375ms to run NodePressure ...
	I1213 13:45:58.080526  723278 start.go:242] waiting for startup goroutines ...
	I1213 13:45:58.080532  723278 start.go:247] waiting for cluster config update ...
	I1213 13:45:58.080546  723278 start.go:256] writing updated cluster config ...
	I1213 13:45:58.080792  723278 ssh_runner.go:195] Run: rm -f paused
	I1213 13:45:58.084163  723278 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:45:58.086962  723278 pod_ready.go:83] waiting for pod "coredns-7d764666f9-qfkgp" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 13:46:00.094762  723278 pod_ready.go:104] pod "coredns-7d764666f9-qfkgp" is not "Ready", error: <nil>
	I1213 13:46:00.198920  716415 node_ready.go:49] node "default-k8s-diff-port-038239" is "Ready"
	I1213 13:46:00.198949  716415 node_ready.go:38] duration metric: took 11.503406019s for node "default-k8s-diff-port-038239" to be "Ready" ...
	I1213 13:46:00.198975  716415 api_server.go:52] waiting for apiserver process to appear ...
	I1213 13:46:00.199023  716415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:46:00.227974  716415 api_server.go:72] duration metric: took 11.81112409s to wait for apiserver process to appear ...
	I1213 13:46:00.227998  716415 api_server.go:88] waiting for apiserver healthz status ...
	I1213 13:46:00.228017  716415 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1213 13:46:00.232293  716415 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1213 13:46:00.233429  716415 api_server.go:141] control plane version: v1.34.2
	I1213 13:46:00.233455  716415 api_server.go:131] duration metric: took 5.45001ms to wait for apiserver health ...
	I1213 13:46:00.233467  716415 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 13:46:00.237234  716415 system_pods.go:59] 8 kube-system pods found
	I1213 13:46:00.237278  716415 system_pods.go:61] "coredns-66bc5c9577-tzzmx" [980da903-c99d-4518-9ee3-7e5a96adec7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:46:00.237287  716415 system_pods.go:61] "etcd-default-k8s-diff-port-038239" [4281e3fe-09b2-4f4b-b735-e81d8f92611d] Running
	I1213 13:46:00.237296  716415 system_pods.go:61] "kindnet-c65rs" [70da74c6-b3f7-4c93-830f-cd2e08c1a82b] Running
	I1213 13:46:00.237302  716415 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038239" [61e90c83-4a74-41da-af00-64ad96e831b1] Running
	I1213 13:46:00.237312  716415 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038239" [327b2203-201b-4496-b88d-085894210077] Running
	I1213 13:46:00.237322  716415 system_pods.go:61] "kube-proxy-lzwfg" [706752fb-a589-4e6f-b710-228e3650dacd] Running
	I1213 13:46:00.237338  716415 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038239" [ae96dbde-d4ad-4db9-a9d4-dd56f9954d93] Running
	I1213 13:46:00.237349  716415 system_pods.go:61] "storage-provisioner" [ee84dbb0-2764-427e-aa74-2827e9ce9620] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:46:00.237359  716415 system_pods.go:74] duration metric: took 3.885244ms to wait for pod list to return data ...
	I1213 13:46:00.237372  716415 default_sa.go:34] waiting for default service account to be created ...
	I1213 13:46:00.239741  716415 default_sa.go:45] found service account: "default"
	I1213 13:46:00.239759  716415 default_sa.go:55] duration metric: took 2.37235ms for default service account to be created ...
	I1213 13:46:00.239768  716415 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 13:46:00.242540  716415 system_pods.go:86] 8 kube-system pods found
	I1213 13:46:00.242576  716415 system_pods.go:89] "coredns-66bc5c9577-tzzmx" [980da903-c99d-4518-9ee3-7e5a96adec7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:46:00.242584  716415 system_pods.go:89] "etcd-default-k8s-diff-port-038239" [4281e3fe-09b2-4f4b-b735-e81d8f92611d] Running
	I1213 13:46:00.242592  716415 system_pods.go:89] "kindnet-c65rs" [70da74c6-b3f7-4c93-830f-cd2e08c1a82b] Running
	I1213 13:46:00.242597  716415 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-038239" [61e90c83-4a74-41da-af00-64ad96e831b1] Running
	I1213 13:46:00.242603  716415 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-038239" [327b2203-201b-4496-b88d-085894210077] Running
	I1213 13:46:00.242612  716415 system_pods.go:89] "kube-proxy-lzwfg" [706752fb-a589-4e6f-b710-228e3650dacd] Running
	I1213 13:46:00.242618  716415 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-038239" [ae96dbde-d4ad-4db9-a9d4-dd56f9954d93] Running
	I1213 13:46:00.242626  716415 system_pods.go:89] "storage-provisioner" [ee84dbb0-2764-427e-aa74-2827e9ce9620] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:46:00.242654  716415 retry.go:31] will retry after 274.109241ms: missing components: kube-dns
	I1213 13:46:00.556966  716415 system_pods.go:86] 8 kube-system pods found
	I1213 13:46:00.557004  716415 system_pods.go:89] "coredns-66bc5c9577-tzzmx" [980da903-c99d-4518-9ee3-7e5a96adec7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:46:00.557013  716415 system_pods.go:89] "etcd-default-k8s-diff-port-038239" [4281e3fe-09b2-4f4b-b735-e81d8f92611d] Running
	I1213 13:46:00.557023  716415 system_pods.go:89] "kindnet-c65rs" [70da74c6-b3f7-4c93-830f-cd2e08c1a82b] Running
	I1213 13:46:00.557028  716415 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-038239" [61e90c83-4a74-41da-af00-64ad96e831b1] Running
	I1213 13:46:00.557034  716415 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-038239" [327b2203-201b-4496-b88d-085894210077] Running
	I1213 13:46:00.557039  716415 system_pods.go:89] "kube-proxy-lzwfg" [706752fb-a589-4e6f-b710-228e3650dacd] Running
	I1213 13:46:00.557044  716415 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-038239" [ae96dbde-d4ad-4db9-a9d4-dd56f9954d93] Running
	I1213 13:46:00.557051  716415 system_pods.go:89] "storage-provisioner" [ee84dbb0-2764-427e-aa74-2827e9ce9620] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:46:00.557071  716415 retry.go:31] will retry after 235.536121ms: missing components: kube-dns
	I1213 13:46:00.798132  716415 system_pods.go:86] 8 kube-system pods found
	I1213 13:46:00.798173  716415 system_pods.go:89] "coredns-66bc5c9577-tzzmx" [980da903-c99d-4518-9ee3-7e5a96adec7e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:46:00.798186  716415 system_pods.go:89] "etcd-default-k8s-diff-port-038239" [4281e3fe-09b2-4f4b-b735-e81d8f92611d] Running
	I1213 13:46:00.798198  716415 system_pods.go:89] "kindnet-c65rs" [70da74c6-b3f7-4c93-830f-cd2e08c1a82b] Running
	I1213 13:46:00.798208  716415 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-038239" [61e90c83-4a74-41da-af00-64ad96e831b1] Running
	I1213 13:46:00.798218  716415 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-038239" [327b2203-201b-4496-b88d-085894210077] Running
	I1213 13:46:00.798228  716415 system_pods.go:89] "kube-proxy-lzwfg" [706752fb-a589-4e6f-b710-228e3650dacd] Running
	I1213 13:46:00.798237  716415 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-038239" [ae96dbde-d4ad-4db9-a9d4-dd56f9954d93] Running
	I1213 13:46:00.798247  716415 system_pods.go:89] "storage-provisioner" [ee84dbb0-2764-427e-aa74-2827e9ce9620] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 13:46:00.798271  716415 retry.go:31] will retry after 378.510169ms: missing components: kube-dns
	I1213 13:46:01.182013  716415 system_pods.go:86] 8 kube-system pods found
	I1213 13:46:01.182045  716415 system_pods.go:89] "coredns-66bc5c9577-tzzmx" [980da903-c99d-4518-9ee3-7e5a96adec7e] Running
	I1213 13:46:01.182053  716415 system_pods.go:89] "etcd-default-k8s-diff-port-038239" [4281e3fe-09b2-4f4b-b735-e81d8f92611d] Running
	I1213 13:46:01.182059  716415 system_pods.go:89] "kindnet-c65rs" [70da74c6-b3f7-4c93-830f-cd2e08c1a82b] Running
	I1213 13:46:01.182065  716415 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-038239" [61e90c83-4a74-41da-af00-64ad96e831b1] Running
	I1213 13:46:01.182070  716415 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-038239" [327b2203-201b-4496-b88d-085894210077] Running
	I1213 13:46:01.182076  716415 system_pods.go:89] "kube-proxy-lzwfg" [706752fb-a589-4e6f-b710-228e3650dacd] Running
	I1213 13:46:01.182081  716415 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-038239" [ae96dbde-d4ad-4db9-a9d4-dd56f9954d93] Running
	I1213 13:46:01.182085  716415 system_pods.go:89] "storage-provisioner" [ee84dbb0-2764-427e-aa74-2827e9ce9620] Running
	I1213 13:46:01.182094  716415 system_pods.go:126] duration metric: took 942.318198ms to wait for k8s-apps to be running ...
	I1213 13:46:01.182119  716415 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 13:46:01.182170  716415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:46:01.195644  716415 system_svc.go:56] duration metric: took 13.525956ms WaitForService to wait for kubelet
	I1213 13:46:01.195670  716415 kubeadm.go:587] duration metric: took 12.778828294s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:46:01.195688  716415 node_conditions.go:102] verifying NodePressure condition ...
	I1213 13:46:01.198903  716415 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 13:46:01.198933  716415 node_conditions.go:123] node cpu capacity is 8
	I1213 13:46:01.198957  716415 node_conditions.go:105] duration metric: took 3.26212ms to run NodePressure ...
	I1213 13:46:01.198976  716415 start.go:242] waiting for startup goroutines ...
	I1213 13:46:01.198987  716415 start.go:247] waiting for cluster config update ...
	I1213 13:46:01.198999  716415 start.go:256] writing updated cluster config ...
	I1213 13:46:01.199686  716415 ssh_runner.go:195] Run: rm -f paused
	I1213 13:46:01.203628  716415 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:46:01.207666  716415 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tzzmx" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:01.212183  716415 pod_ready.go:94] pod "coredns-66bc5c9577-tzzmx" is "Ready"
	I1213 13:46:01.212203  716415 pod_ready.go:86] duration metric: took 4.51262ms for pod "coredns-66bc5c9577-tzzmx" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:01.214298  716415 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-038239" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:01.218672  716415 pod_ready.go:94] pod "etcd-default-k8s-diff-port-038239" is "Ready"
	I1213 13:46:01.218694  716415 pod_ready.go:86] duration metric: took 4.373924ms for pod "etcd-default-k8s-diff-port-038239" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:01.220976  716415 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-038239" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:01.225310  716415 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-038239" is "Ready"
	I1213 13:46:01.225334  716415 pod_ready.go:86] duration metric: took 4.336647ms for pod "kube-apiserver-default-k8s-diff-port-038239" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:01.281142  716415 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-038239" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:01.608560  716415 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-038239" is "Ready"
	I1213 13:46:01.608589  716415 pod_ready.go:86] duration metric: took 327.415577ms for pod "kube-controller-manager-default-k8s-diff-port-038239" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:01.809288  716415 pod_ready.go:83] waiting for pod "kube-proxy-lzwfg" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:02.208495  716415 pod_ready.go:94] pod "kube-proxy-lzwfg" is "Ready"
	I1213 13:46:02.208522  716415 pod_ready.go:86] duration metric: took 399.204123ms for pod "kube-proxy-lzwfg" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:02.409361  716415 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-038239" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:02.808527  716415 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-038239" is "Ready"
	I1213 13:46:02.808561  716415 pod_ready.go:86] duration metric: took 399.172618ms for pod "kube-scheduler-default-k8s-diff-port-038239" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:02.808577  716415 pod_ready.go:40] duration metric: took 1.604919744s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:46:02.868894  716415 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 13:46:02.871056  716415 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-038239" cluster and "default" namespace by default
	W1213 13:46:00.905939  717532 pod_ready.go:104] pod "coredns-5dd5756b68-88x45" is not "Ready", error: <nil>
	W1213 13:46:02.907854  717532 pod_ready.go:104] pod "coredns-5dd5756b68-88x45" is not "Ready", error: <nil>
	I1213 13:46:01.200044  726383 out.go:252] * Restarting existing docker container for "embed-certs-973953" ...
	I1213 13:46:01.200112  726383 cli_runner.go:164] Run: docker start embed-certs-973953
	I1213 13:46:01.445932  726383 cli_runner.go:164] Run: docker container inspect embed-certs-973953 --format={{.State.Status}}
	I1213 13:46:01.464634  726383 kic.go:430] container "embed-certs-973953" state is running.
	I1213 13:46:01.464996  726383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-973953
	I1213 13:46:01.483084  726383 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/config.json ...
	I1213 13:46:01.483325  726383 machine.go:94] provisionDockerMachine start ...
	I1213 13:46:01.483412  726383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-973953
	I1213 13:46:01.501903  726383 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:01.502164  726383 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I1213 13:46:01.502180  726383 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:46:01.502878  726383 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60088->127.0.0.1:33504: read: connection reset by peer
	I1213 13:46:04.655623  726383 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-973953
	
	I1213 13:46:04.655650  726383 ubuntu.go:182] provisioning hostname "embed-certs-973953"
	I1213 13:46:04.655718  726383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-973953
	I1213 13:46:04.680720  726383 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:04.681063  726383 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I1213 13:46:04.681084  726383 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-973953 && echo "embed-certs-973953" | sudo tee /etc/hostname
	I1213 13:46:04.847524  726383 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-973953
	
	I1213 13:46:04.847633  726383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-973953
	I1213 13:46:04.871399  726383 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:04.871807  726383 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I1213 13:46:04.871845  726383 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-973953' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-973953/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-973953' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:46:05.019367  726383 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 13:46:05.019400  726383 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-390571/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-390571/.minikube}
	I1213 13:46:05.019454  726383 ubuntu.go:190] setting up certificates
	I1213 13:46:05.019467  726383 provision.go:84] configureAuth start
	I1213 13:46:05.019541  726383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-973953
	I1213 13:46:05.041999  726383 provision.go:143] copyHostCerts
	I1213 13:46:05.042077  726383 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem, removing ...
	I1213 13:46:05.042112  726383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem
	I1213 13:46:05.042180  726383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem (1078 bytes)
	I1213 13:46:05.042298  726383 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem, removing ...
	I1213 13:46:05.042313  726383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem
	I1213 13:46:05.042352  726383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem (1123 bytes)
	I1213 13:46:05.042409  726383 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem, removing ...
	I1213 13:46:05.042416  726383 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem
	I1213 13:46:05.042440  726383 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem (1679 bytes)
	I1213 13:46:05.042488  726383 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem org=jenkins.embed-certs-973953 san=[127.0.0.1 192.168.103.2 embed-certs-973953 localhost minikube]
	I1213 13:46:05.140277  726383 provision.go:177] copyRemoteCerts
	I1213 13:46:05.140359  726383 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:46:05.140409  726383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-973953
	I1213 13:46:05.163749  726383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/embed-certs-973953/id_rsa Username:docker}
	I1213 13:46:05.272159  726383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 13:46:05.292920  726383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:46:05.312338  726383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 13:46:05.331754  726383 provision.go:87] duration metric: took 312.257487ms to configureAuth
	I1213 13:46:05.331822  726383 ubuntu.go:206] setting minikube options for container-runtime
	I1213 13:46:05.332022  726383 config.go:182] Loaded profile config "embed-certs-973953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:46:05.332138  726383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-973953
	I1213 13:46:05.352518  726383 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:05.352913  726383 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I1213 13:46:05.352937  726383 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1213 13:46:02.593749  723278 pod_ready.go:104] pod "coredns-7d764666f9-qfkgp" is not "Ready", error: <nil>
	W1213 13:46:04.594846  723278 pod_ready.go:104] pod "coredns-7d764666f9-qfkgp" is not "Ready", error: <nil>
	I1213 13:46:06.782518  726383 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:46:06.782545  726383 machine.go:97] duration metric: took 5.29920512s to provisionDockerMachine
	I1213 13:46:06.782560  726383 start.go:293] postStartSetup for "embed-certs-973953" (driver="docker")
	I1213 13:46:06.782582  726383 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:46:06.782667  726383 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:46:06.782718  726383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-973953
	I1213 13:46:06.804357  726383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/embed-certs-973953/id_rsa Username:docker}
	I1213 13:46:06.908851  726383 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:46:06.913161  726383 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 13:46:06.913194  726383 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 13:46:06.913209  726383 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/addons for local assets ...
	I1213 13:46:06.913263  726383 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/files for local assets ...
	I1213 13:46:06.913359  726383 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem -> 3941302.pem in /etc/ssl/certs
	I1213 13:46:06.913482  726383 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 13:46:06.921698  726383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:46:06.940155  726383 start.go:296] duration metric: took 157.574321ms for postStartSetup
	I1213 13:46:06.940238  726383 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:46:06.940281  726383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-973953
	I1213 13:46:06.958621  726383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/embed-certs-973953/id_rsa Username:docker}
	I1213 13:46:07.050572  726383 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 13:46:07.055078  726383 fix.go:56] duration metric: took 5.876760162s for fixHost
	I1213 13:46:07.055110  726383 start.go:83] releasing machines lock for "embed-certs-973953", held for 5.876811675s
	I1213 13:46:07.055169  726383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-973953
	I1213 13:46:07.072606  726383 ssh_runner.go:195] Run: cat /version.json
	I1213 13:46:07.072650  726383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-973953
	I1213 13:46:07.072661  726383 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:46:07.072732  726383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-973953
	I1213 13:46:07.092376  726383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/embed-certs-973953/id_rsa Username:docker}
	I1213 13:46:07.092609  726383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/embed-certs-973953/id_rsa Username:docker}
	I1213 13:46:07.249914  726383 ssh_runner.go:195] Run: systemctl --version
	I1213 13:46:07.256641  726383 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:46:07.292527  726383 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 13:46:07.297384  726383 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:46:07.297445  726383 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:46:07.305520  726383 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 13:46:07.305549  726383 start.go:496] detecting cgroup driver to use...
	I1213 13:46:07.305586  726383 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 13:46:07.305641  726383 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:46:07.319553  726383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:46:07.332618  726383 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:46:07.332658  726383 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:46:07.346601  726383 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:46:07.358437  726383 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:46:07.435074  726383 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:46:07.523412  726383 docker.go:234] disabling docker service ...
	I1213 13:46:07.523481  726383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:46:07.537728  726383 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:46:07.549449  726383 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:46:07.627006  726383 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:46:07.707070  726383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:46:07.720013  726383 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:46:07.735377  726383 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 13:46:07.735452  726383 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:07.744660  726383 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 13:46:07.744731  726383 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:07.754232  726383 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:07.763225  726383 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:07.772097  726383 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:46:07.780073  726383 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:07.788631  726383 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:07.796747  726383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:07.805156  726383 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:46:07.812540  726383 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:46:07.819975  726383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:46:07.900154  726383 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 13:46:08.051968  726383 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:46:08.052050  726383 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:46:08.056395  726383 start.go:564] Will wait 60s for crictl version
	I1213 13:46:08.056455  726383 ssh_runner.go:195] Run: which crictl
	I1213 13:46:08.060077  726383 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 13:46:08.084673  726383 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 13:46:08.084735  726383 ssh_runner.go:195] Run: crio --version
	I1213 13:46:08.113962  726383 ssh_runner.go:195] Run: crio --version
	I1213 13:46:08.142888  726383 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1213 13:46:05.406115  717532 pod_ready.go:104] pod "coredns-5dd5756b68-88x45" is not "Ready", error: <nil>
	W1213 13:46:07.905214  717532 pod_ready.go:104] pod "coredns-5dd5756b68-88x45" is not "Ready", error: <nil>
	I1213 13:46:08.144015  726383 cli_runner.go:164] Run: docker network inspect embed-certs-973953 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:46:08.160624  726383 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1213 13:46:08.165258  726383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:46:08.176252  726383 kubeadm.go:884] updating cluster {Name:embed-certs-973953 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-973953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:46:08.176463  726383 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:46:08.176513  726383 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:46:08.213347  726383 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:46:08.213368  726383 crio.go:433] Images already preloaded, skipping extraction
	I1213 13:46:08.213414  726383 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:46:08.238117  726383 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:46:08.238138  726383 cache_images.go:86] Images are preloaded, skipping loading
	I1213 13:46:08.238145  726383 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1213 13:46:08.238254  726383 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-973953 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-973953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 13:46:08.238348  726383 ssh_runner.go:195] Run: crio config
	I1213 13:46:08.284507  726383 cni.go:84] Creating CNI manager for ""
	I1213 13:46:08.284530  726383 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:46:08.284550  726383 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 13:46:08.284580  726383 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-973953 NodeName:embed-certs-973953 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:46:08.284720  726383 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-973953"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 13:46:08.284821  726383 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 13:46:08.293084  726383 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:46:08.293137  726383 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:46:08.300555  726383 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1213 13:46:08.313260  726383 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 13:46:08.326076  726383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1213 13:46:08.338567  726383 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1213 13:46:08.342154  726383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:46:08.351917  726383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:46:08.432428  726383 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:46:08.459306  726383 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953 for IP: 192.168.103.2
	I1213 13:46:08.459334  726383 certs.go:195] generating shared ca certs ...
	I1213 13:46:08.459356  726383 certs.go:227] acquiring lock for ca certs: {Name:mkb6963f3134ffd486c672ddb3a967e56122d5d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:08.459508  726383 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key
	I1213 13:46:08.459547  726383 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key
	I1213 13:46:08.459558  726383 certs.go:257] generating profile certs ...
	I1213 13:46:08.459639  726383 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/client.key
	I1213 13:46:08.459685  726383 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/apiserver.key.a9523a89
	I1213 13:46:08.459725  726383 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/proxy-client.key
	I1213 13:46:08.459857  726383 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem (1338 bytes)
	W1213 13:46:08.459944  726383 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130_empty.pem, impossibly tiny 0 bytes
	I1213 13:46:08.459956  726383 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:46:08.459983  726383 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:46:08.460008  726383 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:46:08.460032  726383 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem (1679 bytes)
	I1213 13:46:08.460072  726383 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:46:08.460714  726383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:46:08.479377  726383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 13:46:08.498378  726383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:46:08.517186  726383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 13:46:08.540909  726383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1213 13:46:08.559974  726383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 13:46:08.576451  726383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:46:08.593941  726383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/embed-certs-973953/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1213 13:46:08.611019  726383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem --> /usr/share/ca-certificates/394130.pem (1338 bytes)
	I1213 13:46:08.628478  726383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /usr/share/ca-certificates/3941302.pem (1708 bytes)
	I1213 13:46:08.647367  726383 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:46:08.666554  726383 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:46:08.678988  726383 ssh_runner.go:195] Run: openssl version
	I1213 13:46:08.686017  726383 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/394130.pem
	I1213 13:46:08.693548  726383 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/394130.pem /etc/ssl/certs/394130.pem
	I1213 13:46:08.700971  726383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/394130.pem
	I1213 13:46:08.704374  726383 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/394130.pem
	I1213 13:46:08.704415  726383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/394130.pem
	I1213 13:46:08.739615  726383 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 13:46:08.747342  726383 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3941302.pem
	I1213 13:46:08.755008  726383 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3941302.pem /etc/ssl/certs/3941302.pem
	I1213 13:46:08.762258  726383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3941302.pem
	I1213 13:46:08.766028  726383 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/3941302.pem
	I1213 13:46:08.766075  726383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3941302.pem
	I1213 13:46:08.800803  726383 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 13:46:08.808082  726383 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:08.816124  726383 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:46:08.823368  726383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:08.826968  726383 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:08.827011  726383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:08.862010  726383 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 13:46:08.869588  726383 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:46:08.873391  726383 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 13:46:08.908584  726383 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 13:46:08.944116  726383 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 13:46:08.988164  726383 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 13:46:09.031551  726383 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 13:46:09.081966  726383 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 13:46:09.132613  726383 kubeadm.go:401] StartCluster: {Name:embed-certs-973953 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-973953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:46:09.132733  726383 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:46:09.132816  726383 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:46:09.160692  726383 cri.go:89] found id: "ca59722508ee8428d337934b1ea258c96ebcf5e6b597926df8e7c55eb6a97674"
	I1213 13:46:09.160719  726383 cri.go:89] found id: "63a2ba4a5a1d996ff60a23b991b5a0cfa5dc9703b1f26e1efb01ad5545a6e669"
	I1213 13:46:09.160725  726383 cri.go:89] found id: "447b95afd76fcddb599b0f25dc7d2ae95263bb9a7ac29ae570889adee6a816b5"
	I1213 13:46:09.160729  726383 cri.go:89] found id: "628ec34c6d25dfe03110c51ea75cc04af49fd848dda5cc30d4f2618ba82a847e"
	I1213 13:46:09.160734  726383 cri.go:89] found id: ""
	I1213 13:46:09.160806  726383 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 13:46:09.172487  726383 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:46:09Z" level=error msg="open /run/runc: no such file or directory"
	I1213 13:46:09.172557  726383 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:46:09.180472  726383 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 13:46:09.180490  726383 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 13:46:09.180538  726383 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 13:46:09.190263  726383 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 13:46:09.191298  726383 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-973953" does not appear in /home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:46:09.191989  726383 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-390571/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-973953" cluster setting kubeconfig missing "embed-certs-973953" context setting]
	I1213 13:46:09.192861  726383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/kubeconfig: {Name:mke96882ff9199e558f67b9408c8f04265bde7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:09.194528  726383 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 13:46:09.203126  726383 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1213 13:46:09.203155  726383 kubeadm.go:602] duration metric: took 22.6578ms to restartPrimaryControlPlane
	I1213 13:46:09.203165  726383 kubeadm.go:403] duration metric: took 70.566328ms to StartCluster
	I1213 13:46:09.203183  726383 settings.go:142] acquiring lock: {Name:mkb44193ba58b09d8615650747eaad19c43e1a80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:09.203251  726383 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:46:09.205553  726383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/kubeconfig: {Name:mke96882ff9199e558f67b9408c8f04265bde7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:09.205845  726383 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:46:09.205918  726383 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 13:46:09.205997  726383 config.go:182] Loaded profile config "embed-certs-973953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:46:09.206023  726383 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-973953"
	I1213 13:46:09.206045  726383 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-973953"
	I1213 13:46:09.206042  726383 addons.go:70] Setting dashboard=true in profile "embed-certs-973953"
	W1213 13:46:09.206057  726383 addons.go:248] addon storage-provisioner should already be in state true
	I1213 13:46:09.206077  726383 addons.go:239] Setting addon dashboard=true in "embed-certs-973953"
	W1213 13:46:09.206091  726383 addons.go:248] addon dashboard should already be in state true
	I1213 13:46:09.206101  726383 host.go:66] Checking if "embed-certs-973953" exists ...
	I1213 13:46:09.206125  726383 host.go:66] Checking if "embed-certs-973953" exists ...
	I1213 13:46:09.206045  726383 addons.go:70] Setting default-storageclass=true in profile "embed-certs-973953"
	I1213 13:46:09.206177  726383 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-973953"
	I1213 13:46:09.206465  726383 cli_runner.go:164] Run: docker container inspect embed-certs-973953 --format={{.State.Status}}
	I1213 13:46:09.206637  726383 cli_runner.go:164] Run: docker container inspect embed-certs-973953 --format={{.State.Status}}
	I1213 13:46:09.206649  726383 cli_runner.go:164] Run: docker container inspect embed-certs-973953 --format={{.State.Status}}
	I1213 13:46:09.207989  726383 out.go:179] * Verifying Kubernetes components...
	I1213 13:46:09.209122  726383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:46:09.233031  726383 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 13:46:09.234184  726383 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 13:46:09.235407  726383 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 13:46:09.235454  726383 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:46:09.235467  726383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 13:46:09.235531  726383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-973953
	I1213 13:46:09.236485  726383 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 13:46:09.236506  726383 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 13:46:09.236564  726383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-973953
	I1213 13:46:09.236861  726383 addons.go:239] Setting addon default-storageclass=true in "embed-certs-973953"
	W1213 13:46:09.236886  726383 addons.go:248] addon default-storageclass should already be in state true
	I1213 13:46:09.236915  726383 host.go:66] Checking if "embed-certs-973953" exists ...
	I1213 13:46:09.237380  726383 cli_runner.go:164] Run: docker container inspect embed-certs-973953 --format={{.State.Status}}
	I1213 13:46:09.269265  726383 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 13:46:09.269296  726383 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 13:46:09.269257  726383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/embed-certs-973953/id_rsa Username:docker}
	I1213 13:46:09.269353  726383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-973953
	I1213 13:46:09.273566  726383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/embed-certs-973953/id_rsa Username:docker}
	I1213 13:46:09.295480  726383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/embed-certs-973953/id_rsa Username:docker}
	I1213 13:46:09.357892  726383 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:46:09.372053  726383 node_ready.go:35] waiting up to 6m0s for node "embed-certs-973953" to be "Ready" ...
	I1213 13:46:09.384178  726383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:46:09.386637  726383 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 13:46:09.386656  726383 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 13:46:09.401501  726383 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 13:46:09.401522  726383 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 13:46:09.409514  726383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 13:46:09.419150  726383 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 13:46:09.419171  726383 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 13:46:09.436956  726383 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 13:46:09.436976  726383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 13:46:09.454008  726383 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 13:46:09.454029  726383 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 13:46:09.467343  726383 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 13:46:09.467448  726383 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 13:46:09.481960  726383 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 13:46:09.481981  726383 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 13:46:09.496741  726383 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 13:46:09.496764  726383 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 13:46:09.511059  726383 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 13:46:09.511079  726383 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 13:46:09.524137  726383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 13:46:10.980992  726383 node_ready.go:49] node "embed-certs-973953" is "Ready"
	I1213 13:46:10.981032  726383 node_ready.go:38] duration metric: took 1.608935418s for node "embed-certs-973953" to be "Ready" ...
	I1213 13:46:10.981051  726383 api_server.go:52] waiting for apiserver process to appear ...
	I1213 13:46:10.981112  726383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:46:11.506032  726383 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.121817359s)
	I1213 13:46:11.506100  726383 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.096554696s)
	I1213 13:46:11.506198  726383 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.98202884s)
	I1213 13:46:11.506253  726383 api_server.go:72] duration metric: took 2.30036796s to wait for apiserver process to appear ...
	I1213 13:46:11.506349  726383 api_server.go:88] waiting for apiserver healthz status ...
	I1213 13:46:11.506383  726383 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1213 13:46:11.507952  726383 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-973953 addons enable metrics-server
	
	I1213 13:46:11.514158  726383 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:46:11.514182  726383 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
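A 500 from /healthz with only the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes hooks failing is typical of an apiserver that has only just come up; those post-start hooks normally complete within a few seconds. To poll the same endpoints by hand (a sketch, assuming kubectl is pointed at this profile's context):

	kubectl get --raw='/healthz?verbose'
	kubectl get --raw='/readyz?verbose'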
	I1213 13:46:11.522092  726383 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1213 13:46:07.093097  723278 pod_ready.go:104] pod "coredns-7d764666f9-qfkgp" is not "Ready", error: <nil>
	W1213 13:46:09.094738  723278 pod_ready.go:104] pod "coredns-7d764666f9-qfkgp" is not "Ready", error: <nil>
	W1213 13:46:11.593689  723278 pod_ready.go:104] pod "coredns-7d764666f9-qfkgp" is not "Ready", error: <nil>
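The sections that follow are the cluster log bundle captured for the default-k8s-diff-port-038239 profile; the ==> ... <== headings match what minikube's own log collection (e.g. minikube logs) emits. To gather an equivalent bundle manually, something like the following should work:

	minikube -p default-k8s-diff-port-038239 logs --file=cluster-logs.txt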
	
	
	==> CRI-O <==
	Dec 13 13:46:00 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:00.55312332Z" level=info msg="Starting container: 4d84e148f5f783e4738370e2e0a05bf727b2317ebb6b8b5d78328b357daa94f3" id=49605c53-63c7-4da0-936b-7bb3b272b762 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:46:00 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:00.555836061Z" level=info msg="Started container" PID=1885 containerID=4d84e148f5f783e4738370e2e0a05bf727b2317ebb6b8b5d78328b357daa94f3 description=kube-system/coredns-66bc5c9577-tzzmx/coredns id=49605c53-63c7-4da0-936b-7bb3b272b762 name=/runtime.v1.RuntimeService/StartContainer sandboxID=88f107870bede8477adec5728fda7df467b750e71dbf50347da07a74c22887ea
	Dec 13 13:46:03 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:03.380851882Z" level=info msg="Running pod sandbox: default/busybox/POD" id=e593cd9d-ae1b-4398-ad82-ba60ca8934ed name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:46:03 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:03.380939894Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:03 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:03.387536536Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ffa295128c52abfb4b4428ac7310054ae8105f27293cc8f919d854ff4ece104c UID:c7c5ad6c-b8c5-45ca-a64a-a6a035816784 NetNS:/var/run/netns/20257c86-3adc-4a8c-b972-4b7d09ea492c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00089a050}] Aliases:map[]}"
	Dec 13 13:46:03 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:03.387576058Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 13 13:46:03 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:03.400079405Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ffa295128c52abfb4b4428ac7310054ae8105f27293cc8f919d854ff4ece104c UID:c7c5ad6c-b8c5-45ca-a64a-a6a035816784 NetNS:/var/run/netns/20257c86-3adc-4a8c-b972-4b7d09ea492c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00089a050}] Aliases:map[]}"
	Dec 13 13:46:03 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:03.40024577Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 13 13:46:03 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:03.401219261Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 13:46:03 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:03.402474133Z" level=info msg="Ran pod sandbox ffa295128c52abfb4b4428ac7310054ae8105f27293cc8f919d854ff4ece104c with infra container: default/busybox/POD" id=e593cd9d-ae1b-4398-ad82-ba60ca8934ed name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:46:03 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:03.403824963Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7a5d0699-5da5-4a57-8a7e-0b5d3fbab344 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:03 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:03.403961357Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=7a5d0699-5da5-4a57-8a7e-0b5d3fbab344 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:03 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:03.404040852Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=7a5d0699-5da5-4a57-8a7e-0b5d3fbab344 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:03 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:03.404931577Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=76eb0603-ce60-4ae1-ac3a-9272ff214c53 name=/runtime.v1.ImageService/PullImage
	Dec 13 13:46:03 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:03.406595594Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 13 13:46:04 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:04.049692621Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=76eb0603-ce60-4ae1-ac3a-9272ff214c53 name=/runtime.v1.ImageService/PullImage
	Dec 13 13:46:04 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:04.050448609Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d7600b14-c8dc-4d7d-a498-05ebe8ff575e name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:04 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:04.052173329Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7adb4ffd-f75e-4f68-b43a-356736b8fdf2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:04 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:04.055357909Z" level=info msg="Creating container: default/busybox/busybox" id=9d9a7be1-928a-4fca-99bc-05ecb751a7c7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:04 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:04.055479537Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:04 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:04.060346706Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:04 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:04.061175019Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:04 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:04.092141786Z" level=info msg="Created container da8edd5fb615ead23d2702fd64e766f0b87949da3d2c2f8789795f94a13dfccd: default/busybox/busybox" id=9d9a7be1-928a-4fca-99bc-05ecb751a7c7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:04 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:04.092870131Z" level=info msg="Starting container: da8edd5fb615ead23d2702fd64e766f0b87949da3d2c2f8789795f94a13dfccd" id=010d56cc-5ec5-45ec-b471-d26d5bd64cb3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:46:04 default-k8s-diff-port-038239 crio[775]: time="2025-12-13T13:46:04.09496695Z" level=info msg="Started container" PID=1955 containerID=da8edd5fb615ead23d2702fd64e766f0b87949da3d2c2f8789795f94a13dfccd description=default/busybox/busybox id=010d56cc-5ec5-45ec-b471-d26d5bd64cb3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ffa295128c52abfb4b4428ac7310054ae8105f27293cc8f919d854ff4ece104c
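The container status table below comes from the node's CRI. A roughly equivalent manual query against the same profile is:

	minikube ssh -p default-k8s-diff-port-038239 -- sudo crictl ps -a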
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	da8edd5fb615e       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   ffa295128c52a       busybox                                                default
	4d84e148f5f78       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   88f107870bede       coredns-66bc5c9577-tzzmx                               kube-system
	2d454fe2a583f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   1d9333e851131       storage-provisioner                                    kube-system
	dd2013fe48428       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      23 seconds ago      Running             kube-proxy                0                   294f618a80b46       kube-proxy-lzwfg                                       kube-system
	8c3e7f4414912       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   e4dfd96398e21       kindnet-c65rs                                          kube-system
	27795ffe472cf       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      33 seconds ago      Running             kube-scheduler            0                   09726c4ee2083       kube-scheduler-default-k8s-diff-port-038239            kube-system
	9218e530c08ca       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      33 seconds ago      Running             kube-controller-manager   0                   08bbee333a675       kube-controller-manager-default-k8s-diff-port-038239   kube-system
	65713fb0aa079       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      33 seconds ago      Running             kube-apiserver            0                   46bbbb5b739df       kube-apiserver-default-k8s-diff-port-038239            kube-system
	b75ba0e01b853       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      33 seconds ago      Running             etcd                      0                   c18270ee77f50       etcd-default-k8s-diff-port-038239                      kube-system
	
	
	==> coredns [4d84e148f5f783e4738370e2e0a05bf727b2317ebb6b8b5d78328b357daa94f3] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58406 - 15880 "HINFO IN 1029323316613237996.7639348335231434547. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.503267037s
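The node description that follows is standard kubectl output; to reproduce it against this cluster (assuming kubectl is pointed at this profile's context) one could run:

	kubectl describe node default-k8s-diff-port-038239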
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-038239
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-038239
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=default-k8s-diff-port-038239
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_45_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:45:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-038239
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:46:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:46:03 +0000   Sat, 13 Dec 2025 13:45:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:46:03 +0000   Sat, 13 Dec 2025 13:45:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:46:03 +0000   Sat, 13 Dec 2025 13:45:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 13:46:03 +0000   Sat, 13 Dec 2025 13:45:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-038239
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                411d424d-b720-4c82-b27f-51e7954655e7
	  Boot ID:                    3a031c38-2de5-4abf-9191-ca3cf8c37af1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-tzzmx                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-038239                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-c65rs                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-038239             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-038239    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-lzwfg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-038239             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node default-k8s-diff-port-038239 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node default-k8s-diff-port-038239 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node default-k8s-diff-port-038239 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node default-k8s-diff-port-038239 event: Registered Node default-k8s-diff-port-038239 in Controller
	  Normal  NodeReady                13s   kubelet          Node default-k8s-diff-port-038239 status is now: NodeReady
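The dmesg excerpt below is dominated by "martian source" messages for 10.244.0.x addresses; these are commonly seen with the bridged/kindnet networking these test clusters use and are usually benign noise rather than a failure cause. To pull the same data from the node:

	minikube ssh -p default-k8s-diff-port-038239 "sudo dmesg | grep -i martian"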
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 d4 5a 35 c7 c3 08 06
	[  +0.021086] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +19.681588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 0c 97 18 9b e3 08 06
	[  +0.000314] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 04 61 d2 c8 ed 08 06
	[Dec13 13:44] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[  +7.252347] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 ce fd 58 59 0f 08 06
	[  +0.000117] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	[  +1.567410] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 59 b8 80 29 4a 08 06
	[  +0.000370] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +13.814205] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 cb 6b 87 5d af 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[Dec13 13:45] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8e 49 cc d7 b3 9c 08 06
	[  +0.000851] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	
	
	==> etcd [b75ba0e01b853526e16f8ceeb16149811cd26452113c77d671a7fdfaf9b6eacb] <==
	{"level":"warn","ts":"2025-12-13T13:45:39.696928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:39.704901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:39.714259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:39.727040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:39.748103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:39.756544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:39.771373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:39.785261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:39.793881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:39.806185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:39.818889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:39.830188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:39.840842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:39.851949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:39.863443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:39.874400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:39.886554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:39.896634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:39.906663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:39.916006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:39.923398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:39.938186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:39.946358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:39.953211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:40.004386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43764","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:46:12 up  2:28,  0 user,  load average: 5.70, 4.16, 2.62
	Linux default-k8s-diff-port-038239 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8c3e7f441491222b1f1930d158b2a88524001ae097bbe09ee62b6e48f2124898] <==
	I1213 13:45:49.189958       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 13:45:49.190213       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1213 13:45:49.190358       1 main.go:148] setting mtu 1500 for CNI 
	I1213 13:45:49.190377       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 13:45:49.190404       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T13:45:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 13:45:49.391547       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 13:45:49.391593       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 13:45:49.391606       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 13:45:49.391803       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 13:45:49.592324       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 13:45:49.592350       1 metrics.go:72] Registering metrics
	I1213 13:45:49.592407       1 controller.go:711] "Syncing nftables rules"
	I1213 13:45:59.395049       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 13:45:59.395133       1 main.go:301] handling current node
	I1213 13:46:09.394875       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 13:46:09.394912       1 main.go:301] handling current node
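The single "nri plugin exited: failed to connect to NRI service" line above appears to reflect that this CRI-O instance does not expose an NRI socket at /var/run/nri/nri.sock; kindnet keeps running without it. To check whether the socket directory exists on the node (path taken from the error message):

	minikube ssh -p default-k8s-diff-port-038239 -- ls -l /var/run/nri/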
	
	
	==> kube-apiserver [65713fb0aa0799ff1da939273b189b682a36ba378a688ef9f6f1a5d89be9b773] <==
	E1213 13:45:40.692935       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1213 13:45:40.697044       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 13:45:40.704876       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:45:40.705161       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1213 13:45:40.713696       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:45:40.713805       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 13:45:40.895956       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 13:45:41.499535       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1213 13:45:41.503252       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1213 13:45:41.503269       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 13:45:41.967138       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 13:45:42.002242       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 13:45:42.105197       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1213 13:45:42.111566       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1213 13:45:42.112997       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 13:45:42.117079       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:45:42.556667       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 13:45:42.932929       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 13:45:42.945345       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1213 13:45:42.957929       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 13:45:48.410234       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:45:48.413982       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:45:48.610211       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 13:45:48.658686       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1213 13:46:11.175104       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8444->192.168.94.1:40234: use of closed network connection
	
	
	==> kube-controller-manager [9218e530c08ca37d791918066f605a4f944eca0a22cc70ce9194b6283b1f13ee] <==
	I1213 13:45:47.516324       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 13:45:47.533401       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1213 13:45:47.555042       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1213 13:45:47.555063       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1213 13:45:47.555092       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 13:45:47.555144       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 13:45:47.555187       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 13:45:47.555231       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 13:45:47.555304       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1213 13:45:47.555376       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1213 13:45:47.555510       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 13:45:47.556045       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1213 13:45:47.556102       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 13:45:47.556157       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 13:45:47.556246       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 13:45:47.556261       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1213 13:45:47.556508       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 13:45:47.556822       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1213 13:45:47.559317       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 13:45:47.560622       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 13:45:47.565126       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 13:45:47.568354       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 13:45:47.578244       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1213 13:45:47.586632       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 13:46:02.499949       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [dd2013fe48428846c4c16e4261acd2d16e3da725c5a5c7663bfe484190ab30fb] <==
	I1213 13:45:49.072705       1 server_linux.go:53] "Using iptables proxy"
	I1213 13:45:49.138387       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 13:45:49.239154       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 13:45:49.239191       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1213 13:45:49.239312       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:45:49.257885       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:45:49.257931       1 server_linux.go:132] "Using iptables Proxier"
	I1213 13:45:49.262930       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:45:49.263321       1 server.go:527] "Version info" version="v1.34.2"
	I1213 13:45:49.263361       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:45:49.265351       1 config.go:200] "Starting service config controller"
	I1213 13:45:49.265391       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:45:49.265945       1 config.go:309] "Starting node config controller"
	I1213 13:45:49.265970       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:45:49.265972       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:45:49.265979       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:45:49.265984       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:45:49.266144       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:45:49.266152       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:45:49.366413       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 13:45:49.366425       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 13:45:49.366462       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [27795ffe472cfcdde92e826de494fa949670b9622f51a801439ebfe8dae08707] <==
	I1213 13:45:40.765637       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1213 13:45:40.768059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 13:45:40.768325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 13:45:40.768511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 13:45:40.768568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 13:45:40.768638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 13:45:40.768679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 13:45:40.768703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 13:45:40.768743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 13:45:40.768751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 13:45:40.768797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 13:45:40.770634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 13:45:40.770738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 13:45:40.770918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 13:45:40.771025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 13:45:40.771071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 13:45:40.770926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 13:45:40.771712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 13:45:40.772258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 13:45:40.771824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 13:45:41.626111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 13:45:41.742904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 13:45:41.783618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 13:45:41.795728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1213 13:45:42.365949       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 13:45:43 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:45:43.823987    1339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-038239" podStartSLOduration=1.823967484 podStartE2EDuration="1.823967484s" podCreationTimestamp="2025-12-13 13:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:45:43.814059713 +0000 UTC m=+1.122328847" watchObservedRunningTime="2025-12-13 13:45:43.823967484 +0000 UTC m=+1.132236611"
	Dec 13 13:45:43 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:45:43.833093    1339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-038239" podStartSLOduration=1.8330737620000002 podStartE2EDuration="1.833073762s" podCreationTimestamp="2025-12-13 13:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:45:43.833024284 +0000 UTC m=+1.141293411" watchObservedRunningTime="2025-12-13 13:45:43.833073762 +0000 UTC m=+1.141342889"
	Dec 13 13:45:43 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:45:43.833209    1339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-038239" podStartSLOduration=1.833201083 podStartE2EDuration="1.833201083s" podCreationTimestamp="2025-12-13 13:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:45:43.823961418 +0000 UTC m=+1.132230534" watchObservedRunningTime="2025-12-13 13:45:43.833201083 +0000 UTC m=+1.141470212"
	Dec 13 13:45:43 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:45:43.851023    1339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-038239" podStartSLOduration=1.851002957 podStartE2EDuration="1.851002957s" podCreationTimestamp="2025-12-13 13:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:45:43.842107714 +0000 UTC m=+1.150376838" watchObservedRunningTime="2025-12-13 13:45:43.851002957 +0000 UTC m=+1.159272085"
	Dec 13 13:45:47 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:45:47.612632    1339 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 13 13:45:47 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:45:47.613392    1339 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 13 13:45:48 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:45:48.703623    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/706752fb-a589-4e6f-b710-228e3650dacd-kube-proxy\") pod \"kube-proxy-lzwfg\" (UID: \"706752fb-a589-4e6f-b710-228e3650dacd\") " pod="kube-system/kube-proxy-lzwfg"
	Dec 13 13:45:48 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:45:48.703716    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqzxs\" (UniqueName: \"kubernetes.io/projected/706752fb-a589-4e6f-b710-228e3650dacd-kube-api-access-pqzxs\") pod \"kube-proxy-lzwfg\" (UID: \"706752fb-a589-4e6f-b710-228e3650dacd\") " pod="kube-system/kube-proxy-lzwfg"
	Dec 13 13:45:48 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:45:48.703800    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/706752fb-a589-4e6f-b710-228e3650dacd-lib-modules\") pod \"kube-proxy-lzwfg\" (UID: \"706752fb-a589-4e6f-b710-228e3650dacd\") " pod="kube-system/kube-proxy-lzwfg"
	Dec 13 13:45:48 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:45:48.703847    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/706752fb-a589-4e6f-b710-228e3650dacd-xtables-lock\") pod \"kube-proxy-lzwfg\" (UID: \"706752fb-a589-4e6f-b710-228e3650dacd\") " pod="kube-system/kube-proxy-lzwfg"
	Dec 13 13:45:48 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:45:48.804177    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/70da74c6-b3f7-4c93-830f-cd2e08c1a82b-cni-cfg\") pod \"kindnet-c65rs\" (UID: \"70da74c6-b3f7-4c93-830f-cd2e08c1a82b\") " pod="kube-system/kindnet-c65rs"
	Dec 13 13:45:48 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:45:48.804217    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70da74c6-b3f7-4c93-830f-cd2e08c1a82b-xtables-lock\") pod \"kindnet-c65rs\" (UID: \"70da74c6-b3f7-4c93-830f-cd2e08c1a82b\") " pod="kube-system/kindnet-c65rs"
	Dec 13 13:45:48 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:45:48.804268    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl7sb\" (UniqueName: \"kubernetes.io/projected/70da74c6-b3f7-4c93-830f-cd2e08c1a82b-kube-api-access-wl7sb\") pod \"kindnet-c65rs\" (UID: \"70da74c6-b3f7-4c93-830f-cd2e08c1a82b\") " pod="kube-system/kindnet-c65rs"
	Dec 13 13:45:48 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:45:48.804308    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70da74c6-b3f7-4c93-830f-cd2e08c1a82b-lib-modules\") pod \"kindnet-c65rs\" (UID: \"70da74c6-b3f7-4c93-830f-cd2e08c1a82b\") " pod="kube-system/kindnet-c65rs"
	Dec 13 13:45:49 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:45:49.814011    1339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lzwfg" podStartSLOduration=1.813991224 podStartE2EDuration="1.813991224s" podCreationTimestamp="2025-12-13 13:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:45:49.813847827 +0000 UTC m=+7.122116963" watchObservedRunningTime="2025-12-13 13:45:49.813991224 +0000 UTC m=+7.122260352"
	Dec 13 13:45:50 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:45:50.239332    1339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-c65rs" podStartSLOduration=2.239310405 podStartE2EDuration="2.239310405s" podCreationTimestamp="2025-12-13 13:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:45:49.8237613 +0000 UTC m=+7.132030438" watchObservedRunningTime="2025-12-13 13:45:50.239310405 +0000 UTC m=+7.547579532"
	Dec 13 13:45:59 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:45:59.880236    1339 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 13 13:46:00 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:46:00.093745    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ee84dbb0-2764-427e-aa74-2827e9ce9620-tmp\") pod \"storage-provisioner\" (UID: \"ee84dbb0-2764-427e-aa74-2827e9ce9620\") " pod="kube-system/storage-provisioner"
	Dec 13 13:46:00 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:46:00.093847    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/980da903-c99d-4518-9ee3-7e5a96adec7e-config-volume\") pod \"coredns-66bc5c9577-tzzmx\" (UID: \"980da903-c99d-4518-9ee3-7e5a96adec7e\") " pod="kube-system/coredns-66bc5c9577-tzzmx"
	Dec 13 13:46:00 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:46:00.093889    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsh5b\" (UniqueName: \"kubernetes.io/projected/ee84dbb0-2764-427e-aa74-2827e9ce9620-kube-api-access-tsh5b\") pod \"storage-provisioner\" (UID: \"ee84dbb0-2764-427e-aa74-2827e9ce9620\") " pod="kube-system/storage-provisioner"
	Dec 13 13:46:00 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:46:00.093923    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmch8\" (UniqueName: \"kubernetes.io/projected/980da903-c99d-4518-9ee3-7e5a96adec7e-kube-api-access-dmch8\") pod \"coredns-66bc5c9577-tzzmx\" (UID: \"980da903-c99d-4518-9ee3-7e5a96adec7e\") " pod="kube-system/coredns-66bc5c9577-tzzmx"
	Dec 13 13:46:00 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:46:00.843679    1339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tzzmx" podStartSLOduration=12.843659039 podStartE2EDuration="12.843659039s" podCreationTimestamp="2025-12-13 13:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:46:00.84311326 +0000 UTC m=+18.151382389" watchObservedRunningTime="2025-12-13 13:46:00.843659039 +0000 UTC m=+18.151928166"
	Dec 13 13:46:00 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:46:00.864440    1339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.864416944 podStartE2EDuration="12.864416944s" podCreationTimestamp="2025-12-13 13:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:46:00.854139132 +0000 UTC m=+18.162408259" watchObservedRunningTime="2025-12-13 13:46:00.864416944 +0000 UTC m=+18.172686071"
	Dec 13 13:46:03 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:46:03.111553    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whssx\" (UniqueName: \"kubernetes.io/projected/c7c5ad6c-b8c5-45ca-a64a-a6a035816784-kube-api-access-whssx\") pod \"busybox\" (UID: \"c7c5ad6c-b8c5-45ca-a64a-a6a035816784\") " pod="default/busybox"
	Dec 13 13:46:04 default-k8s-diff-port-038239 kubelet[1339]: I1213 13:46:04.856290    1339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.209314839 podStartE2EDuration="1.85626633s" podCreationTimestamp="2025-12-13 13:46:03 +0000 UTC" firstStartedPulling="2025-12-13 13:46:03.404385732 +0000 UTC m=+20.712654840" lastFinishedPulling="2025-12-13 13:46:04.05133721 +0000 UTC m=+21.359606331" observedRunningTime="2025-12-13 13:46:04.856100209 +0000 UTC m=+22.164369336" watchObservedRunningTime="2025-12-13 13:46:04.85626633 +0000 UTC m=+22.164535457"
	
	
	==> storage-provisioner [2d454fe2a583fd8cc6e8f60c838b8813c11204b991a0a364d185fb40c1f7741c] <==
	I1213 13:46:00.460699       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 13:46:00.470223       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 13:46:00.470276       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 13:46:00.502459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:00.554002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 13:46:00.554199       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 13:46:00.554405       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-038239_bcb21146-7740-4c94-9a04-74f6f4316348!
	I1213 13:46:00.554721       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"164d55c5-fd3d-4e0d-b772-31680a1bef78", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-038239_bcb21146-7740-4c94-9a04-74f6f4316348 became leader
	W1213 13:46:00.633280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:00.637512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 13:46:00.654897       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-038239_bcb21146-7740-4c94-9a04-74f6f4316348!
	W1213 13:46:02.641951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:02.646369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:04.650370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:04.658003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:06.661527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:06.677054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:08.680944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:08.685186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:10.688689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:10.692844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:12.696644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:12.700398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-038239 -n default-k8s-diff-port-038239
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-038239 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.20s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (5.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-417583 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-417583 --alsologtostderr -v=1: exit status 80 (1.725643542s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-417583 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:46:30.148914  730755 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:46:30.149335  730755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:46:30.149346  730755 out.go:374] Setting ErrFile to fd 2...
	I1213 13:46:30.149350  730755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:46:30.149549  730755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:46:30.149805  730755 out.go:368] Setting JSON to false
	I1213 13:46:30.149827  730755 mustload.go:66] Loading cluster: old-k8s-version-417583
	I1213 13:46:30.150177  730755 config.go:182] Loaded profile config "old-k8s-version-417583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1213 13:46:30.150555  730755 cli_runner.go:164] Run: docker container inspect old-k8s-version-417583 --format={{.State.Status}}
	I1213 13:46:30.170656  730755 host.go:66] Checking if "old-k8s-version-417583" exists ...
	I1213 13:46:30.171009  730755 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:46:30.232597  730755 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-13 13:46:30.223321674 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:46:30.233233  730755 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765613186-22122/minikube-v1.37.0-1765613186-22122-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765613186-22122-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-417583 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1213 13:46:30.234984  730755 out.go:179] * Pausing node old-k8s-version-417583 ... 
	I1213 13:46:30.236114  730755 host.go:66] Checking if "old-k8s-version-417583" exists ...
	I1213 13:46:30.236375  730755 ssh_runner.go:195] Run: systemctl --version
	I1213 13:46:30.236430  730755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-417583
	I1213 13:46:30.255260  730755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/old-k8s-version-417583/id_rsa Username:docker}
	I1213 13:46:30.352789  730755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:46:30.377476  730755 pause.go:52] kubelet running: true
	I1213 13:46:30.377544  730755 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 13:46:30.564244  730755 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 13:46:30.564344  730755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 13:46:30.639452  730755 cri.go:89] found id: "bf316ec39e1247adf8dc8543e13846287c9472cc6e75cf6ed70278cf73884a0a"
	I1213 13:46:30.639477  730755 cri.go:89] found id: "6fd50c4030ae99114f360bf8bf8e003c0917f31e66bec58d3076b945801acd5b"
	I1213 13:46:30.639483  730755 cri.go:89] found id: "2069ed533a8ad52737dcef451b6d099922803eecb51937c753798519a94d95e8"
	I1213 13:46:30.639488  730755 cri.go:89] found id: "8d59436ad6e8aefa4049cee6ffe796b2e48a4ce118db989602db0224a027bf31"
	I1213 13:46:30.639493  730755 cri.go:89] found id: "79234c842f27555cb96ebeddf4318727c659169fff23ba629b737fffd2c85c24"
	I1213 13:46:30.639498  730755 cri.go:89] found id: "682fe66dfbdf3e1e235c5a788a0304e2256519646f7b610b234ee76910a815c4"
	I1213 13:46:30.639503  730755 cri.go:89] found id: "50199bb0f2355e999cd87d325a8063909be474aea9edf7a8e719fb56e8183d8d"
	I1213 13:46:30.639508  730755 cri.go:89] found id: "8da5fd67633a606e436724c6c76834926bff7b7f1601133a881869ee1a6ef0e1"
	I1213 13:46:30.639513  730755 cri.go:89] found id: "2f447f41ac211953c99934f154aa22a56bee7630e2c5ef5666482cf2393ce32c"
	I1213 13:46:30.639522  730755 cri.go:89] found id: "81a3fc4a782303fe043c7b13634e7071cf3ae07a96a06e4dddbb01517edf8214"
	I1213 13:46:30.639530  730755 cri.go:89] found id: "b333c1dd58c753a6d8c0b646480033e4d67d49277fa5bd2d1b8355fcf576cc3b"
	I1213 13:46:30.639535  730755 cri.go:89] found id: ""
	I1213 13:46:30.639605  730755 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:46:30.652060  730755 retry.go:31] will retry after 235.249109ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:46:30Z" level=error msg="open /run/runc: no such file or directory"
	I1213 13:46:30.887511  730755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:46:30.903011  730755 pause.go:52] kubelet running: false
	I1213 13:46:30.903066  730755 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 13:46:31.067063  730755 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 13:46:31.067146  730755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 13:46:31.144448  730755 cri.go:89] found id: "bf316ec39e1247adf8dc8543e13846287c9472cc6e75cf6ed70278cf73884a0a"
	I1213 13:46:31.144477  730755 cri.go:89] found id: "6fd50c4030ae99114f360bf8bf8e003c0917f31e66bec58d3076b945801acd5b"
	I1213 13:46:31.144484  730755 cri.go:89] found id: "2069ed533a8ad52737dcef451b6d099922803eecb51937c753798519a94d95e8"
	I1213 13:46:31.144489  730755 cri.go:89] found id: "8d59436ad6e8aefa4049cee6ffe796b2e48a4ce118db989602db0224a027bf31"
	I1213 13:46:31.144494  730755 cri.go:89] found id: "79234c842f27555cb96ebeddf4318727c659169fff23ba629b737fffd2c85c24"
	I1213 13:46:31.144499  730755 cri.go:89] found id: "682fe66dfbdf3e1e235c5a788a0304e2256519646f7b610b234ee76910a815c4"
	I1213 13:46:31.144504  730755 cri.go:89] found id: "50199bb0f2355e999cd87d325a8063909be474aea9edf7a8e719fb56e8183d8d"
	I1213 13:46:31.144508  730755 cri.go:89] found id: "8da5fd67633a606e436724c6c76834926bff7b7f1601133a881869ee1a6ef0e1"
	I1213 13:46:31.144512  730755 cri.go:89] found id: "2f447f41ac211953c99934f154aa22a56bee7630e2c5ef5666482cf2393ce32c"
	I1213 13:46:31.144521  730755 cri.go:89] found id: "81a3fc4a782303fe043c7b13634e7071cf3ae07a96a06e4dddbb01517edf8214"
	I1213 13:46:31.144528  730755 cri.go:89] found id: "b333c1dd58c753a6d8c0b646480033e4d67d49277fa5bd2d1b8355fcf576cc3b"
	I1213 13:46:31.144532  730755 cri.go:89] found id: ""
	I1213 13:46:31.144580  730755 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:46:31.156134  730755 retry.go:31] will retry after 407.565689ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:46:31Z" level=error msg="open /run/runc: no such file or directory"
	I1213 13:46:31.564806  730755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:46:31.577918  730755 pause.go:52] kubelet running: false
	I1213 13:46:31.577972  730755 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 13:46:31.717554  730755 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 13:46:31.717676  730755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 13:46:31.785930  730755 cri.go:89] found id: "bf316ec39e1247adf8dc8543e13846287c9472cc6e75cf6ed70278cf73884a0a"
	I1213 13:46:31.785959  730755 cri.go:89] found id: "6fd50c4030ae99114f360bf8bf8e003c0917f31e66bec58d3076b945801acd5b"
	I1213 13:46:31.785963  730755 cri.go:89] found id: "2069ed533a8ad52737dcef451b6d099922803eecb51937c753798519a94d95e8"
	I1213 13:46:31.785968  730755 cri.go:89] found id: "8d59436ad6e8aefa4049cee6ffe796b2e48a4ce118db989602db0224a027bf31"
	I1213 13:46:31.785982  730755 cri.go:89] found id: "79234c842f27555cb96ebeddf4318727c659169fff23ba629b737fffd2c85c24"
	I1213 13:46:31.785988  730755 cri.go:89] found id: "682fe66dfbdf3e1e235c5a788a0304e2256519646f7b610b234ee76910a815c4"
	I1213 13:46:31.785993  730755 cri.go:89] found id: "50199bb0f2355e999cd87d325a8063909be474aea9edf7a8e719fb56e8183d8d"
	I1213 13:46:31.785998  730755 cri.go:89] found id: "8da5fd67633a606e436724c6c76834926bff7b7f1601133a881869ee1a6ef0e1"
	I1213 13:46:31.786002  730755 cri.go:89] found id: "2f447f41ac211953c99934f154aa22a56bee7630e2c5ef5666482cf2393ce32c"
	I1213 13:46:31.786011  730755 cri.go:89] found id: "81a3fc4a782303fe043c7b13634e7071cf3ae07a96a06e4dddbb01517edf8214"
	I1213 13:46:31.786020  730755 cri.go:89] found id: "b333c1dd58c753a6d8c0b646480033e4d67d49277fa5bd2d1b8355fcf576cc3b"
	I1213 13:46:31.786024  730755 cri.go:89] found id: ""
	I1213 13:46:31.786085  730755 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:46:31.799924  730755 out.go:203] 
	W1213 13:46:31.801118  730755 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:46:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:46:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 13:46:31.801143  730755 out.go:285] * 
	* 
	W1213 13:46:31.805989  730755 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 13:46:31.807211  730755 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-417583 --alsologtostderr -v=1 failed: exit status 80
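The stderr above traces exit status 80 to minikube's pause path running `sudo runc list -f json` inside the node and hitting "open /run/runc: no such file or directory" on this crio profile. A minimal reproduction sketch against the same profile, assuming the node is still up; the directory listing is an assumption about where the runtime keeps its state, not something this log confirms:

	out/minikube-linux-amd64 -p old-k8s-version-417583 ssh -- "sudo runc list -f json"      # reproduces the failure: /run/runc is absent on this node
	out/minikube-linux-amd64 -p old-k8s-version-417583 ssh -- "ls -d /run/runc /run/crio"   # assumption: check which runtime state directories actually exist
	out/minikube-linux-amd64 -p old-k8s-version-417583 ssh -- "sudo crictl ps -a --quiet"   # the same containers are listable through crictl, as the stderr above shows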
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-417583
helpers_test.go:244: (dbg) docker inspect old-k8s-version-417583:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "43fbdd9bc16f6948cc67363ead86d4b92da73fde95dcde2a6781335bb540eae6",
	        "Created": "2025-12-13T13:44:18.6267097Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 718156,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:45:28.999915023Z",
	            "FinishedAt": "2025-12-13T13:45:27.746416778Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/43fbdd9bc16f6948cc67363ead86d4b92da73fde95dcde2a6781335bb540eae6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/43fbdd9bc16f6948cc67363ead86d4b92da73fde95dcde2a6781335bb540eae6/hostname",
	        "HostsPath": "/var/lib/docker/containers/43fbdd9bc16f6948cc67363ead86d4b92da73fde95dcde2a6781335bb540eae6/hosts",
	        "LogPath": "/var/lib/docker/containers/43fbdd9bc16f6948cc67363ead86d4b92da73fde95dcde2a6781335bb540eae6/43fbdd9bc16f6948cc67363ead86d4b92da73fde95dcde2a6781335bb540eae6-json.log",
	        "Name": "/old-k8s-version-417583",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-417583:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-417583",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "43fbdd9bc16f6948cc67363ead86d4b92da73fde95dcde2a6781335bb540eae6",
	                "LowerDir": "/var/lib/docker/overlay2/8fe32fc87ee53a75ed4b398af0c6f7afe0037d62c0d6677e1d539a22b32748aa-init/diff:/var/lib/docker/overlay2/2ab30f867418f233812f5ff754587aaeab7569a5579dc6a5c99873a35cf81eb6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8fe32fc87ee53a75ed4b398af0c6f7afe0037d62c0d6677e1d539a22b32748aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8fe32fc87ee53a75ed4b398af0c6f7afe0037d62c0d6677e1d539a22b32748aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8fe32fc87ee53a75ed4b398af0c6f7afe0037d62c0d6677e1d539a22b32748aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-417583",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-417583/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-417583",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-417583",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-417583",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c385a384d513c7274407c0e74cbf6692b59d34915e0cb33c1d5838a3a4864a5d",
	            "SandboxKey": "/var/run/docker/netns/c385a384d513",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33493"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33494"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33497"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33495"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33496"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-417583": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cde7b54cbcc8a3b8ab40bd9dd21786e91e0af49dc344d306865f5245da4b5481",
	                    "EndpointID": "c7ff994ea3d7e3f61604c4618d8b21644ef13ccaa288cd01e477214a0df0b6f4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "82:f4:0e:96:bc:bb",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-417583",
	                        "43fbdd9bc16f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-417583 -n old-k8s-version-417583
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-417583 -n old-k8s-version-417583: exit status 2 (325.537233ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
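Exit status 2 with the host still "Running" is consistent with the partial pause above: docker inspect reports the container running, but the pause attempt had already run `sudo systemctl disable --now kubelet` (the stderr shows "kubelet running: false" on its retries), so a component-level status check reports stopped components. A quick follow-up sketch against the same profile:

	out/minikube-linux-amd64 status -p old-k8s-version-417583                                      # full component breakdown; non-zero exit when any component is not running
	out/minikube-linux-amd64 -p old-k8s-version-417583 ssh -- "sudo systemctl is-active kubelet"   # expected to report inactive after the failed pause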
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-417583 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-417583 logs -n 25: (1.052035823s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-884214 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo containerd config dump                                                                                                                                                                                                  │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo crio config                                                                                                                                                                                                             │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ delete  │ -p bridge-884214                                                                                                                                                                                                                              │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ delete  │ -p disable-driver-mounts-031848                                                                                                                                                                                                               │ disable-driver-mounts-031848 │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p default-k8s-diff-port-038239 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ addons  │ enable metrics-server -p no-preload-992258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-417583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p old-k8s-version-417583 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ stop    │ -p no-preload-992258 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ addons  │ enable metrics-server -p embed-certs-973953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ stop    │ -p embed-certs-973953 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ addons  │ enable dashboard -p no-preload-992258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p no-preload-992258 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-973953 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p embed-certs-973953 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-038239 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-038239 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ image   │ old-k8s-version-417583 image list --format=json                                                                                                                                                                                               │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p old-k8s-version-417583 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-038239 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p default-k8s-diff-port-038239 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:46:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:46:30.410503  730912 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:46:30.410810  730912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:46:30.410832  730912 out.go:374] Setting ErrFile to fd 2...
	I1213 13:46:30.410840  730912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:46:30.411128  730912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:46:30.411641  730912 out.go:368] Setting JSON to false
	I1213 13:46:30.412992  730912 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8938,"bootTime":1765624652,"procs":401,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:46:30.413078  730912 start.go:143] virtualization: kvm guest
	I1213 13:46:30.415169  730912 out.go:179] * [default-k8s-diff-port-038239] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:46:30.419740  730912 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:46:30.419894  730912 notify.go:221] Checking for updates...
	I1213 13:46:30.422660  730912 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:46:30.423897  730912 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:46:30.425345  730912 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:46:30.426581  730912 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:46:30.427902  730912 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:46:30.429961  730912 config.go:182] Loaded profile config "default-k8s-diff-port-038239": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:46:30.430543  730912 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:46:30.457899  730912 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:46:30.458019  730912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:46:30.516374  730912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-13 13:46:30.504557936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:46:30.516508  730912 docker.go:319] overlay module found
	I1213 13:46:30.518220  730912 out.go:179] * Using the docker driver based on existing profile
	I1213 13:46:30.519395  730912 start.go:309] selected driver: docker
	I1213 13:46:30.519412  730912 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-038239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-038239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:46:30.519551  730912 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:46:30.520288  730912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:46:30.581651  730912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-13 13:46:30.57171943 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:46:30.581988  730912 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:46:30.582017  730912 cni.go:84] Creating CNI manager for ""
	I1213 13:46:30.582069  730912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:46:30.582100  730912 start.go:353] cluster config:
	{Name:default-k8s-diff-port-038239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-038239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:46:30.584942  730912 out.go:179] * Starting "default-k8s-diff-port-038239" primary control-plane node in "default-k8s-diff-port-038239" cluster
	I1213 13:46:30.586068  730912 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 13:46:30.587291  730912 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 13:46:30.588425  730912 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:46:30.588464  730912 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 13:46:30.588492  730912 cache.go:65] Caching tarball of preloaded images
	I1213 13:46:30.588536  730912 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 13:46:30.588615  730912 preload.go:238] Found /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 13:46:30.588644  730912 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 13:46:30.588809  730912 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/config.json ...
	I1213 13:46:30.612081  730912 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 13:46:30.612103  730912 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 13:46:30.612121  730912 cache.go:243] Successfully downloaded all kic artifacts
	I1213 13:46:30.612159  730912 start.go:360] acquireMachinesLock for default-k8s-diff-port-038239: {Name:mk119d774bc71bc45b9aba04bf24de8110105016 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:46:30.612246  730912 start.go:364] duration metric: took 46.93µs to acquireMachinesLock for "default-k8s-diff-port-038239"
	I1213 13:46:30.612273  730912 start.go:96] Skipping create...Using existing machine configuration
	I1213 13:46:30.612284  730912 fix.go:54] fixHost starting: 
	I1213 13:46:30.612586  730912 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-038239 --format={{.State.Status}}
	I1213 13:46:30.631730  730912 fix.go:112] recreateIfNeeded on default-k8s-diff-port-038239: state=Stopped err=<nil>
	W1213 13:46:30.631765  730912 fix.go:138] unexpected machine state, will restart: <nil>
	W1213 13:46:26.551881  726383 pod_ready.go:104] pod "coredns-66bc5c9577-bl59n" is not "Ready", error: <nil>
	W1213 13:46:28.552434  726383 pod_ready.go:104] pod "coredns-66bc5c9577-bl59n" is not "Ready", error: <nil>
	W1213 13:46:28.092963  723278 pod_ready.go:104] pod "coredns-7d764666f9-qfkgp" is not "Ready", error: <nil>
	W1213 13:46:30.093705  723278 pod_ready.go:104] pod "coredns-7d764666f9-qfkgp" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 13 13:45:57 old-k8s-version-417583 crio[561]: time="2025-12-13T13:45:57.361121021Z" level=info msg="Created container b333c1dd58c753a6d8c0b646480033e4d67d49277fa5bd2d1b8355fcf576cc3b: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v5gzb/kubernetes-dashboard" id=42a1c4e1-b3c8-4441-a2b4-8c2fc8710b63 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:45:57 old-k8s-version-417583 crio[561]: time="2025-12-13T13:45:57.361733506Z" level=info msg="Starting container: b333c1dd58c753a6d8c0b646480033e4d67d49277fa5bd2d1b8355fcf576cc3b" id=d1fb6655-0b7f-44b5-ae56-b12cab6c3710 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:45:57 old-k8s-version-417583 crio[561]: time="2025-12-13T13:45:57.36370249Z" level=info msg="Started container" PID=1725 containerID=b333c1dd58c753a6d8c0b646480033e4d67d49277fa5bd2d1b8355fcf576cc3b description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v5gzb/kubernetes-dashboard id=d1fb6655-0b7f-44b5-ae56-b12cab6c3710 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7caf3915baf4cd9e2e60f6b26f56740111d0843bc6d2eae7fcfbf5b695f1a6a8
	Dec 13 13:46:10 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:10.283688346Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1eeca675-f713-43c9-8ec2-be0f7d0cda7c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:10 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:10.284584079Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3893e0d4-677c-45ef-ae01-5bc8a81ab223 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:10 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:10.285581195Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=37090f51-c3bf-436a-a967-24a015e580e0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:10 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:10.285728489Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:10 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:10.29037955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:10 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:10.29056887Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/cdb5f65721296fe2cb7ec72e85a395a140abd13bc6df920d25dfd1fbdcf073c4/merged/etc/passwd: no such file or directory"
	Dec 13 13:46:10 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:10.290610554Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/cdb5f65721296fe2cb7ec72e85a395a140abd13bc6df920d25dfd1fbdcf073c4/merged/etc/group: no such file or directory"
	Dec 13 13:46:10 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:10.290936894Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:10 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:10.326117947Z" level=info msg="Created container bf316ec39e1247adf8dc8543e13846287c9472cc6e75cf6ed70278cf73884a0a: kube-system/storage-provisioner/storage-provisioner" id=37090f51-c3bf-436a-a967-24a015e580e0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:10 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:10.326870173Z" level=info msg="Starting container: bf316ec39e1247adf8dc8543e13846287c9472cc6e75cf6ed70278cf73884a0a" id=d642e914-0535-4afe-a46d-ad7829fb52c4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:46:10 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:10.329140643Z" level=info msg="Started container" PID=1748 containerID=bf316ec39e1247adf8dc8543e13846287c9472cc6e75cf6ed70278cf73884a0a description=kube-system/storage-provisioner/storage-provisioner id=d642e914-0535-4afe-a46d-ad7829fb52c4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a6c458fb7e7c12a85a56f2bc0c4e2aa35e597168056027dfaac73d23655c9496
	Dec 13 13:46:13 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:13.161654367Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=178a8ad7-ffad-4dd9-bf66-932a16243d45 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:13 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:13.162525245Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=64bdae50-95ce-41f1-8aab-68ca038772f2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:13 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:13.16343927Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4qxf4/dashboard-metrics-scraper" id=81f3d1cc-d618-4a45-a3c8-d5b18faadc47 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:13 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:13.163587067Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:13 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:13.169287799Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:13 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:13.169949774Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:13 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:13.200119099Z" level=info msg="Created container 81a3fc4a782303fe043c7b13634e7071cf3ae07a96a06e4dddbb01517edf8214: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4qxf4/dashboard-metrics-scraper" id=81f3d1cc-d618-4a45-a3c8-d5b18faadc47 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:13 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:13.200704989Z" level=info msg="Starting container: 81a3fc4a782303fe043c7b13634e7071cf3ae07a96a06e4dddbb01517edf8214" id=e6a5799f-3669-40eb-bd04-105923d81b38 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:46:13 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:13.202633518Z" level=info msg="Started container" PID=1762 containerID=81a3fc4a782303fe043c7b13634e7071cf3ae07a96a06e4dddbb01517edf8214 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4qxf4/dashboard-metrics-scraper id=e6a5799f-3669-40eb-bd04-105923d81b38 name=/runtime.v1.RuntimeService/StartContainer sandboxID=718ddf68d0381a000f6875a8033c180ab8e3247f903899f892e8da9c55ba1660
	Dec 13 13:46:13 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:13.296282666Z" level=info msg="Removing container: e164e6272aab0000e4df1e0338f58714875a546aaa0f625948ddc74dda4bbf37" id=3b8f6de4-27d9-4d77-b476-54f6a164d22f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 13:46:13 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:13.307080298Z" level=info msg="Removed container e164e6272aab0000e4df1e0338f58714875a546aaa0f625948ddc74dda4bbf37: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4qxf4/dashboard-metrics-scraper" id=3b8f6de4-27d9-4d77-b476-54f6a164d22f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	81a3fc4a78230       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   718ddf68d0381       dashboard-metrics-scraper-5f989dc9cf-4qxf4       kubernetes-dashboard
	bf316ec39e124       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   a6c458fb7e7c1       storage-provisioner                              kube-system
	b333c1dd58c75       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   35 seconds ago      Running             kubernetes-dashboard        0                   7caf3915baf4c       kubernetes-dashboard-8694d4445c-v5gzb            kubernetes-dashboard
	6fd50c4030ae9       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           53 seconds ago      Running             coredns                     0                   a1c94b34fd13c       coredns-5dd5756b68-88x45                         kube-system
	7b338305c08d8       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   3bfe9577ccd74       busybox                                          default
	2069ed533a8ad       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           53 seconds ago      Running             kube-proxy                  0                   5306d2f40e0d1       kube-proxy-r84xd                                 kube-system
	8d59436ad6e8a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   e533053533c1e       kindnet-qnxmc                                    kube-system
	79234c842f275       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   a6c458fb7e7c1       storage-provisioner                              kube-system
	682fe66dfbdf3       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           57 seconds ago      Running             kube-controller-manager     0                   a7caa61b0fe82       kube-controller-manager-old-k8s-version-417583   kube-system
	50199bb0f2355       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           57 seconds ago      Running             kube-apiserver              0                   147d4280783fd       kube-apiserver-old-k8s-version-417583            kube-system
	8da5fd67633a6       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           57 seconds ago      Running             kube-scheduler              0                   da7e1d3d9018a       kube-scheduler-old-k8s-version-417583            kube-system
	2f447f41ac211       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           57 seconds ago      Running             etcd                        0                   8bca2b76237ce       etcd-old-k8s-version-417583                      kube-system
	
	
	==> coredns [6fd50c4030ae99114f360bf8bf8e003c0917f31e66bec58d3076b945801acd5b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48728 - 43055 "HINFO IN 19011016083649870.7166623993924490315. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.105440691s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-417583
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-417583
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=old-k8s-version-417583
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_44_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:44:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-417583
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:46:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:46:08 +0000   Sat, 13 Dec 2025 13:44:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:46:08 +0000   Sat, 13 Dec 2025 13:44:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:46:08 +0000   Sat, 13 Dec 2025 13:44:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 13:46:08 +0000   Sat, 13 Dec 2025 13:44:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-417583
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                40103729-700e-4b92-90bd-81879b0deff9
	  Boot ID:                    3a031c38-2de5-4abf-9191-ca3cf8c37af1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-5dd5756b68-88x45                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-old-k8s-version-417583                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m
	  kube-system                 kindnet-qnxmc                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-old-k8s-version-417583             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-old-k8s-version-417583    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-r84xd                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-old-k8s-version-417583             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-4qxf4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-v5gzb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m                 kubelet          Node old-k8s-version-417583 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                 kubelet          Node old-k8s-version-417583 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m                 kubelet          Node old-k8s-version-417583 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s               node-controller  Node old-k8s-version-417583 event: Registered Node old-k8s-version-417583 in Controller
	  Normal  NodeReady                94s                kubelet          Node old-k8s-version-417583 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node old-k8s-version-417583 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node old-k8s-version-417583 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node old-k8s-version-417583 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                node-controller  Node old-k8s-version-417583 event: Registered Node old-k8s-version-417583 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 d4 5a 35 c7 c3 08 06
	[  +0.021086] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +19.681588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 0c 97 18 9b e3 08 06
	[  +0.000314] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 04 61 d2 c8 ed 08 06
	[Dec13 13:44] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[  +7.252347] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 ce fd 58 59 0f 08 06
	[  +0.000117] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	[  +1.567410] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 59 b8 80 29 4a 08 06
	[  +0.000370] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +13.814205] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 cb 6b 87 5d af 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[Dec13 13:45] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8e 49 cc d7 b3 9c 08 06
	[  +0.000851] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	
	
	==> etcd [2f447f41ac211953c99934f154aa22a56bee7630e2c5ef5666482cf2393ce32c] <==
	{"level":"info","ts":"2025-12-13T13:45:35.740027Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-13T13:45:35.740043Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-13T13:45:35.740546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-13T13:45:35.740662Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-13T13:45:35.740896Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-13T13:45:35.740945Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-13T13:45:35.741997Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-13T13:45:35.742243Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-13T13:45:35.742307Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-13T13:45:35.742513Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-13T13:45:35.743245Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-13T13:45:37.227494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-13T13:45:37.227537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-13T13:45:37.227567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-13T13:45:37.227582Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-13T13:45:37.227588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-13T13:45:37.227598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-13T13:45:37.227605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-13T13:45:37.228727Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-417583 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-13T13:45:37.228748Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-13T13:45:37.22877Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-13T13:45:37.228971Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-13T13:45:37.228997Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-13T13:45:37.229985Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-13T13:45:37.230131Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 13:46:32 up  2:29,  0 user,  load average: 4.66, 4.02, 2.61
	Linux old-k8s-version-417583 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8d59436ad6e8aefa4049cee6ffe796b2e48a4ce118db989602db0224a027bf31] <==
	I1213 13:45:39.804508       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 13:45:39.805058       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1213 13:45:39.805236       1 main.go:148] setting mtu 1500 for CNI 
	I1213 13:45:39.805251       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 13:45:39.805272       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T13:45:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 13:45:40.008012       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 13:45:40.008040       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 13:45:40.008052       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 13:45:40.008174       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 13:45:40.408315       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 13:45:40.408630       1 metrics.go:72] Registering metrics
	I1213 13:45:40.408718       1 controller.go:711] "Syncing nftables rules"
	I1213 13:45:50.009466       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 13:45:50.009533       1 main.go:301] handling current node
	I1213 13:46:00.007885       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 13:46:00.007925       1 main.go:301] handling current node
	I1213 13:46:10.007928       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 13:46:10.007971       1 main.go:301] handling current node
	I1213 13:46:20.009071       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 13:46:20.009121       1 main.go:301] handling current node
	I1213 13:46:30.008603       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 13:46:30.008642       1 main.go:301] handling current node
	
	
	==> kube-apiserver [50199bb0f2355e999cd87d325a8063909be474aea9edf7a8e719fb56e8183d8d] <==
	I1213 13:45:38.289810       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1213 13:45:38.289815       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 13:45:38.289851       1 shared_informer.go:318] Caches are synced for configmaps
	I1213 13:45:38.289909       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1213 13:45:38.289998       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1213 13:45:38.290015       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1213 13:45:38.290255       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1213 13:45:38.290277       1 aggregator.go:166] initial CRD sync complete...
	I1213 13:45:38.290328       1 autoregister_controller.go:141] Starting autoregister controller
	I1213 13:45:38.290340       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 13:45:38.290347       1 cache.go:39] Caches are synced for autoregister controller
	I1213 13:45:38.311999       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1213 13:45:39.178269       1 controller.go:624] quota admission added evaluator for: namespaces
	I1213 13:45:39.191901       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 13:45:39.214336       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1213 13:45:39.234644       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 13:45:39.243598       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 13:45:39.253134       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1213 13:45:39.310369       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.130.236"}
	I1213 13:45:39.330369       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.117.237"}
	I1213 13:45:50.744765       1 controller.go:624] quota admission added evaluator for: endpoints
	I1213 13:45:50.794532       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:45:50.794533       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:45:50.896218       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1213 13:45:50.896218       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [682fe66dfbdf3e1e235c5a788a0304e2256519646f7b610b234ee76910a815c4] <==
	I1213 13:45:50.670238       1 shared_informer.go:318] Caches are synced for resource quota
	I1213 13:45:50.898680       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1213 13:45:50.900179       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1213 13:45:50.905639       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-v5gzb"
	I1213 13:45:50.907041       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-4qxf4"
	I1213 13:45:50.911998       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.103408ms"
	I1213 13:45:50.912159       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="13.732917ms"
	I1213 13:45:50.918503       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.456751ms"
	I1213 13:45:50.918606       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="71.488µs"
	I1213 13:45:50.920183       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.971343ms"
	I1213 13:45:50.920260       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="40.83µs"
	I1213 13:45:50.926159       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="76.728µs"
	I1213 13:45:50.932680       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="44.6µs"
	I1213 13:45:50.987130       1 shared_informer.go:318] Caches are synced for garbage collector
	I1213 13:45:51.070048       1 shared_informer.go:318] Caches are synced for garbage collector
	I1213 13:45:51.070081       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1213 13:45:54.249519       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="146.14µs"
	I1213 13:45:55.254433       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.182µs"
	I1213 13:45:56.261325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="72.538µs"
	I1213 13:45:58.269270       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.254398ms"
	I1213 13:45:58.269361       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="48.625µs"
	I1213 13:46:13.308019       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.143µs"
	I1213 13:46:16.674889       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.22869ms"
	I1213 13:46:16.675250       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="99.64µs"
	I1213 13:46:21.226724       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.748µs"
	
	
	==> kube-proxy [2069ed533a8ad52737dcef451b6d099922803eecb51937c753798519a94d95e8] <==
	I1213 13:45:39.583526       1 server_others.go:69] "Using iptables proxy"
	I1213 13:45:39.594309       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1213 13:45:39.615469       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:45:39.618638       1 server_others.go:152] "Using iptables Proxier"
	I1213 13:45:39.618670       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1213 13:45:39.618677       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1213 13:45:39.618715       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1213 13:45:39.619000       1 server.go:846] "Version info" version="v1.28.0"
	I1213 13:45:39.619014       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:45:39.623228       1 config.go:97] "Starting endpoint slice config controller"
	I1213 13:45:39.623259       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1213 13:45:39.623294       1 config.go:188] "Starting service config controller"
	I1213 13:45:39.623300       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1213 13:45:39.623386       1 config.go:315] "Starting node config controller"
	I1213 13:45:39.623412       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1213 13:45:39.725652       1 shared_informer.go:318] Caches are synced for node config
	I1213 13:45:39.726336       1 shared_informer.go:318] Caches are synced for service config
	I1213 13:45:39.726408       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8da5fd67633a606e436724c6c76834926bff7b7f1601133a881869ee1a6ef0e1] <==
	I1213 13:45:36.201518       1 serving.go:348] Generated self-signed cert in-memory
	I1213 13:45:38.254246       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1213 13:45:38.254269       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:45:38.257876       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1213 13:45:38.257905       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1213 13:45:38.257934       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 13:45:38.257962       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1213 13:45:38.257940       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 13:45:38.258024       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1213 13:45:38.258630       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1213 13:45:38.258687       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1213 13:45:38.358944       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1213 13:45:38.358948       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1213 13:45:38.358943       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Dec 13 13:45:50 old-k8s-version-417583 kubelet[728]: I1213 13:45:50.915476     728 topology_manager.go:215] "Topology Admit Handler" podUID="12bf6f7a-a070-4d1c-a202-1b73285ad918" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-v5gzb"
	Dec 13 13:45:51 old-k8s-version-417583 kubelet[728]: I1213 13:45:51.095890     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/192ed0be-4bec-4260-b59f-129791a3f292-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-4qxf4\" (UID: \"192ed0be-4bec-4260-b59f-129791a3f292\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4qxf4"
	Dec 13 13:45:51 old-k8s-version-417583 kubelet[728]: I1213 13:45:51.095951     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/12bf6f7a-a070-4d1c-a202-1b73285ad918-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-v5gzb\" (UID: \"12bf6f7a-a070-4d1c-a202-1b73285ad918\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v5gzb"
	Dec 13 13:45:51 old-k8s-version-417583 kubelet[728]: I1213 13:45:51.095990     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69sfx\" (UniqueName: \"kubernetes.io/projected/192ed0be-4bec-4260-b59f-129791a3f292-kube-api-access-69sfx\") pod \"dashboard-metrics-scraper-5f989dc9cf-4qxf4\" (UID: \"192ed0be-4bec-4260-b59f-129791a3f292\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4qxf4"
	Dec 13 13:45:51 old-k8s-version-417583 kubelet[728]: I1213 13:45:51.096073     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdmp5\" (UniqueName: \"kubernetes.io/projected/12bf6f7a-a070-4d1c-a202-1b73285ad918-kube-api-access-vdmp5\") pod \"kubernetes-dashboard-8694d4445c-v5gzb\" (UID: \"12bf6f7a-a070-4d1c-a202-1b73285ad918\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v5gzb"
	Dec 13 13:45:54 old-k8s-version-417583 kubelet[728]: I1213 13:45:54.236987     728 scope.go:117] "RemoveContainer" containerID="ef2702a441a15f099e685e6dc25c44e25d54b1ecbf4cf0b4aa3f025c550cfe4b"
	Dec 13 13:45:55 old-k8s-version-417583 kubelet[728]: I1213 13:45:55.241198     728 scope.go:117] "RemoveContainer" containerID="ef2702a441a15f099e685e6dc25c44e25d54b1ecbf4cf0b4aa3f025c550cfe4b"
	Dec 13 13:45:55 old-k8s-version-417583 kubelet[728]: I1213 13:45:55.241551     728 scope.go:117] "RemoveContainer" containerID="e164e6272aab0000e4df1e0338f58714875a546aaa0f625948ddc74dda4bbf37"
	Dec 13 13:45:55 old-k8s-version-417583 kubelet[728]: E1213 13:45:55.241950     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4qxf4_kubernetes-dashboard(192ed0be-4bec-4260-b59f-129791a3f292)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4qxf4" podUID="192ed0be-4bec-4260-b59f-129791a3f292"
	Dec 13 13:45:56 old-k8s-version-417583 kubelet[728]: I1213 13:45:56.246279     728 scope.go:117] "RemoveContainer" containerID="e164e6272aab0000e4df1e0338f58714875a546aaa0f625948ddc74dda4bbf37"
	Dec 13 13:45:56 old-k8s-version-417583 kubelet[728]: E1213 13:45:56.246765     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4qxf4_kubernetes-dashboard(192ed0be-4bec-4260-b59f-129791a3f292)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4qxf4" podUID="192ed0be-4bec-4260-b59f-129791a3f292"
	Dec 13 13:45:58 old-k8s-version-417583 kubelet[728]: I1213 13:45:58.264308     728 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v5gzb" podStartSLOduration=2.17980991 podCreationTimestamp="2025-12-13 13:45:50 +0000 UTC" firstStartedPulling="2025-12-13 13:45:51.239839056 +0000 UTC m=+16.175973670" lastFinishedPulling="2025-12-13 13:45:57.324290123 +0000 UTC m=+22.260424737" observedRunningTime="2025-12-13 13:45:58.263877352 +0000 UTC m=+23.200011973" watchObservedRunningTime="2025-12-13 13:45:58.264260977 +0000 UTC m=+23.200395596"
	Dec 13 13:46:01 old-k8s-version-417583 kubelet[728]: I1213 13:46:01.216693     728 scope.go:117] "RemoveContainer" containerID="e164e6272aab0000e4df1e0338f58714875a546aaa0f625948ddc74dda4bbf37"
	Dec 13 13:46:01 old-k8s-version-417583 kubelet[728]: E1213 13:46:01.217157     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4qxf4_kubernetes-dashboard(192ed0be-4bec-4260-b59f-129791a3f292)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4qxf4" podUID="192ed0be-4bec-4260-b59f-129791a3f292"
	Dec 13 13:46:10 old-k8s-version-417583 kubelet[728]: I1213 13:46:10.283190     728 scope.go:117] "RemoveContainer" containerID="79234c842f27555cb96ebeddf4318727c659169fff23ba629b737fffd2c85c24"
	Dec 13 13:46:13 old-k8s-version-417583 kubelet[728]: I1213 13:46:13.160967     728 scope.go:117] "RemoveContainer" containerID="e164e6272aab0000e4df1e0338f58714875a546aaa0f625948ddc74dda4bbf37"
	Dec 13 13:46:13 old-k8s-version-417583 kubelet[728]: I1213 13:46:13.295035     728 scope.go:117] "RemoveContainer" containerID="e164e6272aab0000e4df1e0338f58714875a546aaa0f625948ddc74dda4bbf37"
	Dec 13 13:46:13 old-k8s-version-417583 kubelet[728]: I1213 13:46:13.295354     728 scope.go:117] "RemoveContainer" containerID="81a3fc4a782303fe043c7b13634e7071cf3ae07a96a06e4dddbb01517edf8214"
	Dec 13 13:46:13 old-k8s-version-417583 kubelet[728]: E1213 13:46:13.295747     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4qxf4_kubernetes-dashboard(192ed0be-4bec-4260-b59f-129791a3f292)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4qxf4" podUID="192ed0be-4bec-4260-b59f-129791a3f292"
	Dec 13 13:46:21 old-k8s-version-417583 kubelet[728]: I1213 13:46:21.216642     728 scope.go:117] "RemoveContainer" containerID="81a3fc4a782303fe043c7b13634e7071cf3ae07a96a06e4dddbb01517edf8214"
	Dec 13 13:46:21 old-k8s-version-417583 kubelet[728]: E1213 13:46:21.216938     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4qxf4_kubernetes-dashboard(192ed0be-4bec-4260-b59f-129791a3f292)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4qxf4" podUID="192ed0be-4bec-4260-b59f-129791a3f292"
	Dec 13 13:46:30 old-k8s-version-417583 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 13:46:30 old-k8s-version-417583 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 13:46:30 old-k8s-version-417583 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 13:46:30 old-k8s-version-417583 systemd[1]: kubelet.service: Consumed 1.576s CPU time.
	
	
	==> kubernetes-dashboard [b333c1dd58c753a6d8c0b646480033e4d67d49277fa5bd2d1b8355fcf576cc3b] <==
	2025/12/13 13:45:57 Starting overwatch
	2025/12/13 13:45:57 Using namespace: kubernetes-dashboard
	2025/12/13 13:45:57 Using in-cluster config to connect to apiserver
	2025/12/13 13:45:57 Using secret token for csrf signing
	2025/12/13 13:45:57 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 13:45:57 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 13:45:57 Successful initial request to the apiserver, version: v1.28.0
	2025/12/13 13:45:57 Generating JWE encryption key
	2025/12/13 13:45:57 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 13:45:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 13:45:57 Initializing JWE encryption key from synchronized object
	2025/12/13 13:45:57 Creating in-cluster Sidecar client
	2025/12/13 13:45:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 13:45:57 Serving insecurely on HTTP port: 9090
	2025/12/13 13:46:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [79234c842f27555cb96ebeddf4318727c659169fff23ba629b737fffd2c85c24] <==
	I1213 13:45:39.561065       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 13:46:09.563606       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [bf316ec39e1247adf8dc8543e13846287c9472cc6e75cf6ed70278cf73884a0a] <==
	I1213 13:46:10.343608       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 13:46:10.355489       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 13:46:10.355548       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1213 13:46:27.752986       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 13:46:27.753064       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0650e3a6-aaf8-4fe6-b96a-06ebf14116a7", APIVersion:"v1", ResourceVersion:"620", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-417583_61b17105-db53-4799-811c-d6672220ca76 became leader
	I1213 13:46:27.753112       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-417583_61b17105-db53-4799-811c-d6672220ca76!
	I1213 13:46:27.854137       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-417583_61b17105-db53-4799-811c-d6672220ca76!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-417583 -n old-k8s-version-417583
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-417583 -n old-k8s-version-417583: exit status 2 (319.767551ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-417583 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-417583
helpers_test.go:244: (dbg) docker inspect old-k8s-version-417583:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "43fbdd9bc16f6948cc67363ead86d4b92da73fde95dcde2a6781335bb540eae6",
	        "Created": "2025-12-13T13:44:18.6267097Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 718156,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:45:28.999915023Z",
	            "FinishedAt": "2025-12-13T13:45:27.746416778Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/43fbdd9bc16f6948cc67363ead86d4b92da73fde95dcde2a6781335bb540eae6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/43fbdd9bc16f6948cc67363ead86d4b92da73fde95dcde2a6781335bb540eae6/hostname",
	        "HostsPath": "/var/lib/docker/containers/43fbdd9bc16f6948cc67363ead86d4b92da73fde95dcde2a6781335bb540eae6/hosts",
	        "LogPath": "/var/lib/docker/containers/43fbdd9bc16f6948cc67363ead86d4b92da73fde95dcde2a6781335bb540eae6/43fbdd9bc16f6948cc67363ead86d4b92da73fde95dcde2a6781335bb540eae6-json.log",
	        "Name": "/old-k8s-version-417583",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-417583:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-417583",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "43fbdd9bc16f6948cc67363ead86d4b92da73fde95dcde2a6781335bb540eae6",
	                "LowerDir": "/var/lib/docker/overlay2/8fe32fc87ee53a75ed4b398af0c6f7afe0037d62c0d6677e1d539a22b32748aa-init/diff:/var/lib/docker/overlay2/2ab30f867418f233812f5ff754587aaeab7569a5579dc6a5c99873a35cf81eb6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8fe32fc87ee53a75ed4b398af0c6f7afe0037d62c0d6677e1d539a22b32748aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8fe32fc87ee53a75ed4b398af0c6f7afe0037d62c0d6677e1d539a22b32748aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8fe32fc87ee53a75ed4b398af0c6f7afe0037d62c0d6677e1d539a22b32748aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-417583",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-417583/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-417583",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-417583",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-417583",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c385a384d513c7274407c0e74cbf6692b59d34915e0cb33c1d5838a3a4864a5d",
	            "SandboxKey": "/var/run/docker/netns/c385a384d513",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33493"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33494"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33497"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33495"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33496"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-417583": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cde7b54cbcc8a3b8ab40bd9dd21786e91e0af49dc344d306865f5245da4b5481",
	                    "EndpointID": "c7ff994ea3d7e3f61604c4618d8b21644ef13ccaa288cd01e477214a0df0b6f4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "82:f4:0e:96:bc:bb",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-417583",
	                        "43fbdd9bc16f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-417583 -n old-k8s-version-417583
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-417583 -n old-k8s-version-417583: exit status 2 (318.847376ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-417583 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-417583 logs -n 25: (1.097868378s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-884214 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo containerd config dump                                                                                                                                                                                                  │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ ssh     │ -p bridge-884214 sudo crio config                                                                                                                                                                                                             │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ delete  │ -p bridge-884214                                                                                                                                                                                                                              │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ delete  │ -p disable-driver-mounts-031848                                                                                                                                                                                                               │ disable-driver-mounts-031848 │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p default-k8s-diff-port-038239 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ addons  │ enable metrics-server -p no-preload-992258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-417583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p old-k8s-version-417583 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ stop    │ -p no-preload-992258 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ addons  │ enable metrics-server -p embed-certs-973953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ stop    │ -p embed-certs-973953 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ addons  │ enable dashboard -p no-preload-992258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p no-preload-992258 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-973953 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p embed-certs-973953 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-038239 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-038239 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ image   │ old-k8s-version-417583 image list --format=json                                                                                                                                                                                               │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p old-k8s-version-417583 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-038239 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p default-k8s-diff-port-038239 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:46:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:46:30.410503  730912 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:46:30.410810  730912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:46:30.410832  730912 out.go:374] Setting ErrFile to fd 2...
	I1213 13:46:30.410840  730912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:46:30.411128  730912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:46:30.411641  730912 out.go:368] Setting JSON to false
	I1213 13:46:30.412992  730912 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8938,"bootTime":1765624652,"procs":401,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:46:30.413078  730912 start.go:143] virtualization: kvm guest
	I1213 13:46:30.415169  730912 out.go:179] * [default-k8s-diff-port-038239] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:46:30.419740  730912 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:46:30.419894  730912 notify.go:221] Checking for updates...
	I1213 13:46:30.422660  730912 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:46:30.423897  730912 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:46:30.425345  730912 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:46:30.426581  730912 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:46:30.427902  730912 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:46:30.429961  730912 config.go:182] Loaded profile config "default-k8s-diff-port-038239": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:46:30.430543  730912 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:46:30.457899  730912 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:46:30.458019  730912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:46:30.516374  730912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-13 13:46:30.504557936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:46:30.516508  730912 docker.go:319] overlay module found
	I1213 13:46:30.518220  730912 out.go:179] * Using the docker driver based on existing profile
	I1213 13:46:30.519395  730912 start.go:309] selected driver: docker
	I1213 13:46:30.519412  730912 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-038239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-038239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:46:30.519551  730912 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:46:30.520288  730912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:46:30.581651  730912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-13 13:46:30.57171943 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:46:30.581988  730912 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:46:30.582017  730912 cni.go:84] Creating CNI manager for ""
	I1213 13:46:30.582069  730912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:46:30.582100  730912 start.go:353] cluster config:
	{Name:default-k8s-diff-port-038239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-038239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:46:30.584942  730912 out.go:179] * Starting "default-k8s-diff-port-038239" primary control-plane node in "default-k8s-diff-port-038239" cluster
	I1213 13:46:30.586068  730912 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 13:46:30.587291  730912 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 13:46:30.588425  730912 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 13:46:30.588464  730912 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 13:46:30.588492  730912 cache.go:65] Caching tarball of preloaded images
	I1213 13:46:30.588536  730912 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 13:46:30.588615  730912 preload.go:238] Found /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 13:46:30.588644  730912 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 13:46:30.588809  730912 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/default-k8s-diff-port-038239/config.json ...
	I1213 13:46:30.612081  730912 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 13:46:30.612103  730912 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 13:46:30.612121  730912 cache.go:243] Successfully downloaded all kic artifacts
	I1213 13:46:30.612159  730912 start.go:360] acquireMachinesLock for default-k8s-diff-port-038239: {Name:mk119d774bc71bc45b9aba04bf24de8110105016 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:46:30.612246  730912 start.go:364] duration metric: took 46.93µs to acquireMachinesLock for "default-k8s-diff-port-038239"
	I1213 13:46:30.612273  730912 start.go:96] Skipping create...Using existing machine configuration
	I1213 13:46:30.612284  730912 fix.go:54] fixHost starting: 
	I1213 13:46:30.612586  730912 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-038239 --format={{.State.Status}}
	I1213 13:46:30.631730  730912 fix.go:112] recreateIfNeeded on default-k8s-diff-port-038239: state=Stopped err=<nil>
	W1213 13:46:30.631765  730912 fix.go:138] unexpected machine state, will restart: <nil>
	W1213 13:46:26.551881  726383 pod_ready.go:104] pod "coredns-66bc5c9577-bl59n" is not "Ready", error: <nil>
	W1213 13:46:28.552434  726383 pod_ready.go:104] pod "coredns-66bc5c9577-bl59n" is not "Ready", error: <nil>
	W1213 13:46:28.092963  723278 pod_ready.go:104] pod "coredns-7d764666f9-qfkgp" is not "Ready", error: <nil>
	W1213 13:46:30.093705  723278 pod_ready.go:104] pod "coredns-7d764666f9-qfkgp" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 13 13:45:57 old-k8s-version-417583 crio[561]: time="2025-12-13T13:45:57.361121021Z" level=info msg="Created container b333c1dd58c753a6d8c0b646480033e4d67d49277fa5bd2d1b8355fcf576cc3b: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v5gzb/kubernetes-dashboard" id=42a1c4e1-b3c8-4441-a2b4-8c2fc8710b63 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:45:57 old-k8s-version-417583 crio[561]: time="2025-12-13T13:45:57.361733506Z" level=info msg="Starting container: b333c1dd58c753a6d8c0b646480033e4d67d49277fa5bd2d1b8355fcf576cc3b" id=d1fb6655-0b7f-44b5-ae56-b12cab6c3710 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:45:57 old-k8s-version-417583 crio[561]: time="2025-12-13T13:45:57.36370249Z" level=info msg="Started container" PID=1725 containerID=b333c1dd58c753a6d8c0b646480033e4d67d49277fa5bd2d1b8355fcf576cc3b description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v5gzb/kubernetes-dashboard id=d1fb6655-0b7f-44b5-ae56-b12cab6c3710 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7caf3915baf4cd9e2e60f6b26f56740111d0843bc6d2eae7fcfbf5b695f1a6a8
	Dec 13 13:46:10 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:10.283688346Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1eeca675-f713-43c9-8ec2-be0f7d0cda7c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:10 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:10.284584079Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3893e0d4-677c-45ef-ae01-5bc8a81ab223 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:10 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:10.285581195Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=37090f51-c3bf-436a-a967-24a015e580e0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:10 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:10.285728489Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:10 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:10.29037955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:10 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:10.29056887Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/cdb5f65721296fe2cb7ec72e85a395a140abd13bc6df920d25dfd1fbdcf073c4/merged/etc/passwd: no such file or directory"
	Dec 13 13:46:10 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:10.290610554Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/cdb5f65721296fe2cb7ec72e85a395a140abd13bc6df920d25dfd1fbdcf073c4/merged/etc/group: no such file or directory"
	Dec 13 13:46:10 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:10.290936894Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:10 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:10.326117947Z" level=info msg="Created container bf316ec39e1247adf8dc8543e13846287c9472cc6e75cf6ed70278cf73884a0a: kube-system/storage-provisioner/storage-provisioner" id=37090f51-c3bf-436a-a967-24a015e580e0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:10 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:10.326870173Z" level=info msg="Starting container: bf316ec39e1247adf8dc8543e13846287c9472cc6e75cf6ed70278cf73884a0a" id=d642e914-0535-4afe-a46d-ad7829fb52c4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:46:10 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:10.329140643Z" level=info msg="Started container" PID=1748 containerID=bf316ec39e1247adf8dc8543e13846287c9472cc6e75cf6ed70278cf73884a0a description=kube-system/storage-provisioner/storage-provisioner id=d642e914-0535-4afe-a46d-ad7829fb52c4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a6c458fb7e7c12a85a56f2bc0c4e2aa35e597168056027dfaac73d23655c9496
	Dec 13 13:46:13 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:13.161654367Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=178a8ad7-ffad-4dd9-bf66-932a16243d45 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:13 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:13.162525245Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=64bdae50-95ce-41f1-8aab-68ca038772f2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:13 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:13.16343927Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4qxf4/dashboard-metrics-scraper" id=81f3d1cc-d618-4a45-a3c8-d5b18faadc47 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:13 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:13.163587067Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:13 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:13.169287799Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:13 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:13.169949774Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:13 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:13.200119099Z" level=info msg="Created container 81a3fc4a782303fe043c7b13634e7071cf3ae07a96a06e4dddbb01517edf8214: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4qxf4/dashboard-metrics-scraper" id=81f3d1cc-d618-4a45-a3c8-d5b18faadc47 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:13 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:13.200704989Z" level=info msg="Starting container: 81a3fc4a782303fe043c7b13634e7071cf3ae07a96a06e4dddbb01517edf8214" id=e6a5799f-3669-40eb-bd04-105923d81b38 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:46:13 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:13.202633518Z" level=info msg="Started container" PID=1762 containerID=81a3fc4a782303fe043c7b13634e7071cf3ae07a96a06e4dddbb01517edf8214 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4qxf4/dashboard-metrics-scraper id=e6a5799f-3669-40eb-bd04-105923d81b38 name=/runtime.v1.RuntimeService/StartContainer sandboxID=718ddf68d0381a000f6875a8033c180ab8e3247f903899f892e8da9c55ba1660
	Dec 13 13:46:13 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:13.296282666Z" level=info msg="Removing container: e164e6272aab0000e4df1e0338f58714875a546aaa0f625948ddc74dda4bbf37" id=3b8f6de4-27d9-4d77-b476-54f6a164d22f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 13:46:13 old-k8s-version-417583 crio[561]: time="2025-12-13T13:46:13.307080298Z" level=info msg="Removed container e164e6272aab0000e4df1e0338f58714875a546aaa0f625948ddc74dda4bbf37: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4qxf4/dashboard-metrics-scraper" id=3b8f6de4-27d9-4d77-b476-54f6a164d22f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	81a3fc4a78230       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   718ddf68d0381       dashboard-metrics-scraper-5f989dc9cf-4qxf4       kubernetes-dashboard
	bf316ec39e124       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   a6c458fb7e7c1       storage-provisioner                              kube-system
	b333c1dd58c75       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   37 seconds ago      Running             kubernetes-dashboard        0                   7caf3915baf4c       kubernetes-dashboard-8694d4445c-v5gzb            kubernetes-dashboard
	6fd50c4030ae9       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           55 seconds ago      Running             coredns                     0                   a1c94b34fd13c       coredns-5dd5756b68-88x45                         kube-system
	7b338305c08d8       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   3bfe9577ccd74       busybox                                          default
	2069ed533a8ad       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           55 seconds ago      Running             kube-proxy                  0                   5306d2f40e0d1       kube-proxy-r84xd                                 kube-system
	8d59436ad6e8a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   e533053533c1e       kindnet-qnxmc                                    kube-system
	79234c842f275       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   a6c458fb7e7c1       storage-provisioner                              kube-system
	682fe66dfbdf3       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           58 seconds ago      Running             kube-controller-manager     0                   a7caa61b0fe82       kube-controller-manager-old-k8s-version-417583   kube-system
	50199bb0f2355       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           58 seconds ago      Running             kube-apiserver              0                   147d4280783fd       kube-apiserver-old-k8s-version-417583            kube-system
	8da5fd67633a6       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           58 seconds ago      Running             kube-scheduler              0                   da7e1d3d9018a       kube-scheduler-old-k8s-version-417583            kube-system
	2f447f41ac211       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           58 seconds ago      Running             etcd                        0                   8bca2b76237ce       etcd-old-k8s-version-417583                      kube-system
	
	
	==> coredns [6fd50c4030ae99114f360bf8bf8e003c0917f31e66bec58d3076b945801acd5b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48728 - 43055 "HINFO IN 19011016083649870.7166623993924490315. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.105440691s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-417583
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-417583
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=old-k8s-version-417583
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_44_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:44:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-417583
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:46:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:46:08 +0000   Sat, 13 Dec 2025 13:44:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:46:08 +0000   Sat, 13 Dec 2025 13:44:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:46:08 +0000   Sat, 13 Dec 2025 13:44:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 13:46:08 +0000   Sat, 13 Dec 2025 13:44:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-417583
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                40103729-700e-4b92-90bd-81879b0deff9
	  Boot ID:                    3a031c38-2de5-4abf-9191-ca3cf8c37af1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-88x45                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-old-k8s-version-417583                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m2s
	  kube-system                 kindnet-qnxmc                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-old-k8s-version-417583             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-old-k8s-version-417583    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-r84xd                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-old-k8s-version-417583             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-4qxf4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-v5gzb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m2s               kubelet          Node old-k8s-version-417583 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s               kubelet          Node old-k8s-version-417583 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s               kubelet          Node old-k8s-version-417583 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m2s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node old-k8s-version-417583 event: Registered Node old-k8s-version-417583 in Controller
	  Normal  NodeReady                96s                kubelet          Node old-k8s-version-417583 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node old-k8s-version-417583 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node old-k8s-version-417583 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node old-k8s-version-417583 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                node-controller  Node old-k8s-version-417583 event: Registered Node old-k8s-version-417583 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 d4 5a 35 c7 c3 08 06
	[  +0.021086] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +19.681588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 0c 97 18 9b e3 08 06
	[  +0.000314] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 04 61 d2 c8 ed 08 06
	[Dec13 13:44] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[  +7.252347] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 ce fd 58 59 0f 08 06
	[  +0.000117] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	[  +1.567410] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 59 b8 80 29 4a 08 06
	[  +0.000370] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +13.814205] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 cb 6b 87 5d af 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[Dec13 13:45] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8e 49 cc d7 b3 9c 08 06
	[  +0.000851] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	
	
	==> etcd [2f447f41ac211953c99934f154aa22a56bee7630e2c5ef5666482cf2393ce32c] <==
	{"level":"info","ts":"2025-12-13T13:45:35.740027Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-13T13:45:35.740043Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-13T13:45:35.740546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-13T13:45:35.740662Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-13T13:45:35.740896Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-13T13:45:35.740945Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-13T13:45:35.741997Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-13T13:45:35.742243Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-13T13:45:35.742307Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-13T13:45:35.742513Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-13T13:45:35.743245Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-13T13:45:37.227494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-13T13:45:37.227537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-13T13:45:37.227567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-13T13:45:37.227582Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-13T13:45:37.227588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-13T13:45:37.227598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-13T13:45:37.227605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-13T13:45:37.228727Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-417583 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-13T13:45:37.228748Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-13T13:45:37.22877Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-13T13:45:37.228971Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-13T13:45:37.228997Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-13T13:45:37.229985Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-13T13:45:37.230131Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 13:46:34 up  2:29,  0 user,  load average: 4.53, 4.01, 2.61
	Linux old-k8s-version-417583 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8d59436ad6e8aefa4049cee6ffe796b2e48a4ce118db989602db0224a027bf31] <==
	I1213 13:45:39.804508       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 13:45:39.805058       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1213 13:45:39.805236       1 main.go:148] setting mtu 1500 for CNI 
	I1213 13:45:39.805251       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 13:45:39.805272       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T13:45:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 13:45:40.008012       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 13:45:40.008040       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 13:45:40.008052       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 13:45:40.008174       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 13:45:40.408315       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 13:45:40.408630       1 metrics.go:72] Registering metrics
	I1213 13:45:40.408718       1 controller.go:711] "Syncing nftables rules"
	I1213 13:45:50.009466       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 13:45:50.009533       1 main.go:301] handling current node
	I1213 13:46:00.007885       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 13:46:00.007925       1 main.go:301] handling current node
	I1213 13:46:10.007928       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 13:46:10.007971       1 main.go:301] handling current node
	I1213 13:46:20.009071       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 13:46:20.009121       1 main.go:301] handling current node
	I1213 13:46:30.008603       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 13:46:30.008642       1 main.go:301] handling current node
	
	
	==> kube-apiserver [50199bb0f2355e999cd87d325a8063909be474aea9edf7a8e719fb56e8183d8d] <==
	I1213 13:45:38.289810       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1213 13:45:38.289815       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 13:45:38.289851       1 shared_informer.go:318] Caches are synced for configmaps
	I1213 13:45:38.289909       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1213 13:45:38.289998       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1213 13:45:38.290015       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1213 13:45:38.290255       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1213 13:45:38.290277       1 aggregator.go:166] initial CRD sync complete...
	I1213 13:45:38.290328       1 autoregister_controller.go:141] Starting autoregister controller
	I1213 13:45:38.290340       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 13:45:38.290347       1 cache.go:39] Caches are synced for autoregister controller
	I1213 13:45:38.311999       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1213 13:45:39.178269       1 controller.go:624] quota admission added evaluator for: namespaces
	I1213 13:45:39.191901       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 13:45:39.214336       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1213 13:45:39.234644       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 13:45:39.243598       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 13:45:39.253134       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1213 13:45:39.310369       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.130.236"}
	I1213 13:45:39.330369       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.117.237"}
	I1213 13:45:50.744765       1 controller.go:624] quota admission added evaluator for: endpoints
	I1213 13:45:50.794532       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:45:50.794533       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:45:50.896218       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1213 13:45:50.896218       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [682fe66dfbdf3e1e235c5a788a0304e2256519646f7b610b234ee76910a815c4] <==
	I1213 13:45:50.670238       1 shared_informer.go:318] Caches are synced for resource quota
	I1213 13:45:50.898680       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1213 13:45:50.900179       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1213 13:45:50.905639       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-v5gzb"
	I1213 13:45:50.907041       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-4qxf4"
	I1213 13:45:50.911998       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.103408ms"
	I1213 13:45:50.912159       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="13.732917ms"
	I1213 13:45:50.918503       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.456751ms"
	I1213 13:45:50.918606       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="71.488µs"
	I1213 13:45:50.920183       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.971343ms"
	I1213 13:45:50.920260       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="40.83µs"
	I1213 13:45:50.926159       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="76.728µs"
	I1213 13:45:50.932680       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="44.6µs"
	I1213 13:45:50.987130       1 shared_informer.go:318] Caches are synced for garbage collector
	I1213 13:45:51.070048       1 shared_informer.go:318] Caches are synced for garbage collector
	I1213 13:45:51.070081       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1213 13:45:54.249519       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="146.14µs"
	I1213 13:45:55.254433       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.182µs"
	I1213 13:45:56.261325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="72.538µs"
	I1213 13:45:58.269270       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.254398ms"
	I1213 13:45:58.269361       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="48.625µs"
	I1213 13:46:13.308019       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.143µs"
	I1213 13:46:16.674889       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.22869ms"
	I1213 13:46:16.675250       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="99.64µs"
	I1213 13:46:21.226724       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.748µs"
	
	
	==> kube-proxy [2069ed533a8ad52737dcef451b6d099922803eecb51937c753798519a94d95e8] <==
	I1213 13:45:39.583526       1 server_others.go:69] "Using iptables proxy"
	I1213 13:45:39.594309       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1213 13:45:39.615469       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:45:39.618638       1 server_others.go:152] "Using iptables Proxier"
	I1213 13:45:39.618670       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1213 13:45:39.618677       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1213 13:45:39.618715       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1213 13:45:39.619000       1 server.go:846] "Version info" version="v1.28.0"
	I1213 13:45:39.619014       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:45:39.623228       1 config.go:97] "Starting endpoint slice config controller"
	I1213 13:45:39.623259       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1213 13:45:39.623294       1 config.go:188] "Starting service config controller"
	I1213 13:45:39.623300       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1213 13:45:39.623386       1 config.go:315] "Starting node config controller"
	I1213 13:45:39.623412       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1213 13:45:39.725652       1 shared_informer.go:318] Caches are synced for node config
	I1213 13:45:39.726336       1 shared_informer.go:318] Caches are synced for service config
	I1213 13:45:39.726408       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8da5fd67633a606e436724c6c76834926bff7b7f1601133a881869ee1a6ef0e1] <==
	I1213 13:45:36.201518       1 serving.go:348] Generated self-signed cert in-memory
	I1213 13:45:38.254246       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1213 13:45:38.254269       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:45:38.257876       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1213 13:45:38.257905       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1213 13:45:38.257934       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 13:45:38.257962       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1213 13:45:38.257940       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 13:45:38.258024       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1213 13:45:38.258630       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1213 13:45:38.258687       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1213 13:45:38.358944       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1213 13:45:38.358948       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1213 13:45:38.358943       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Dec 13 13:45:50 old-k8s-version-417583 kubelet[728]: I1213 13:45:50.915476     728 topology_manager.go:215] "Topology Admit Handler" podUID="12bf6f7a-a070-4d1c-a202-1b73285ad918" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-v5gzb"
	Dec 13 13:45:51 old-k8s-version-417583 kubelet[728]: I1213 13:45:51.095890     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/192ed0be-4bec-4260-b59f-129791a3f292-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-4qxf4\" (UID: \"192ed0be-4bec-4260-b59f-129791a3f292\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4qxf4"
	Dec 13 13:45:51 old-k8s-version-417583 kubelet[728]: I1213 13:45:51.095951     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/12bf6f7a-a070-4d1c-a202-1b73285ad918-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-v5gzb\" (UID: \"12bf6f7a-a070-4d1c-a202-1b73285ad918\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v5gzb"
	Dec 13 13:45:51 old-k8s-version-417583 kubelet[728]: I1213 13:45:51.095990     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69sfx\" (UniqueName: \"kubernetes.io/projected/192ed0be-4bec-4260-b59f-129791a3f292-kube-api-access-69sfx\") pod \"dashboard-metrics-scraper-5f989dc9cf-4qxf4\" (UID: \"192ed0be-4bec-4260-b59f-129791a3f292\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4qxf4"
	Dec 13 13:45:51 old-k8s-version-417583 kubelet[728]: I1213 13:45:51.096073     728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdmp5\" (UniqueName: \"kubernetes.io/projected/12bf6f7a-a070-4d1c-a202-1b73285ad918-kube-api-access-vdmp5\") pod \"kubernetes-dashboard-8694d4445c-v5gzb\" (UID: \"12bf6f7a-a070-4d1c-a202-1b73285ad918\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v5gzb"
	Dec 13 13:45:54 old-k8s-version-417583 kubelet[728]: I1213 13:45:54.236987     728 scope.go:117] "RemoveContainer" containerID="ef2702a441a15f099e685e6dc25c44e25d54b1ecbf4cf0b4aa3f025c550cfe4b"
	Dec 13 13:45:55 old-k8s-version-417583 kubelet[728]: I1213 13:45:55.241198     728 scope.go:117] "RemoveContainer" containerID="ef2702a441a15f099e685e6dc25c44e25d54b1ecbf4cf0b4aa3f025c550cfe4b"
	Dec 13 13:45:55 old-k8s-version-417583 kubelet[728]: I1213 13:45:55.241551     728 scope.go:117] "RemoveContainer" containerID="e164e6272aab0000e4df1e0338f58714875a546aaa0f625948ddc74dda4bbf37"
	Dec 13 13:45:55 old-k8s-version-417583 kubelet[728]: E1213 13:45:55.241950     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4qxf4_kubernetes-dashboard(192ed0be-4bec-4260-b59f-129791a3f292)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4qxf4" podUID="192ed0be-4bec-4260-b59f-129791a3f292"
	Dec 13 13:45:56 old-k8s-version-417583 kubelet[728]: I1213 13:45:56.246279     728 scope.go:117] "RemoveContainer" containerID="e164e6272aab0000e4df1e0338f58714875a546aaa0f625948ddc74dda4bbf37"
	Dec 13 13:45:56 old-k8s-version-417583 kubelet[728]: E1213 13:45:56.246765     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4qxf4_kubernetes-dashboard(192ed0be-4bec-4260-b59f-129791a3f292)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4qxf4" podUID="192ed0be-4bec-4260-b59f-129791a3f292"
	Dec 13 13:45:58 old-k8s-version-417583 kubelet[728]: I1213 13:45:58.264308     728 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v5gzb" podStartSLOduration=2.17980991 podCreationTimestamp="2025-12-13 13:45:50 +0000 UTC" firstStartedPulling="2025-12-13 13:45:51.239839056 +0000 UTC m=+16.175973670" lastFinishedPulling="2025-12-13 13:45:57.324290123 +0000 UTC m=+22.260424737" observedRunningTime="2025-12-13 13:45:58.263877352 +0000 UTC m=+23.200011973" watchObservedRunningTime="2025-12-13 13:45:58.264260977 +0000 UTC m=+23.200395596"
	Dec 13 13:46:01 old-k8s-version-417583 kubelet[728]: I1213 13:46:01.216693     728 scope.go:117] "RemoveContainer" containerID="e164e6272aab0000e4df1e0338f58714875a546aaa0f625948ddc74dda4bbf37"
	Dec 13 13:46:01 old-k8s-version-417583 kubelet[728]: E1213 13:46:01.217157     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4qxf4_kubernetes-dashboard(192ed0be-4bec-4260-b59f-129791a3f292)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4qxf4" podUID="192ed0be-4bec-4260-b59f-129791a3f292"
	Dec 13 13:46:10 old-k8s-version-417583 kubelet[728]: I1213 13:46:10.283190     728 scope.go:117] "RemoveContainer" containerID="79234c842f27555cb96ebeddf4318727c659169fff23ba629b737fffd2c85c24"
	Dec 13 13:46:13 old-k8s-version-417583 kubelet[728]: I1213 13:46:13.160967     728 scope.go:117] "RemoveContainer" containerID="e164e6272aab0000e4df1e0338f58714875a546aaa0f625948ddc74dda4bbf37"
	Dec 13 13:46:13 old-k8s-version-417583 kubelet[728]: I1213 13:46:13.295035     728 scope.go:117] "RemoveContainer" containerID="e164e6272aab0000e4df1e0338f58714875a546aaa0f625948ddc74dda4bbf37"
	Dec 13 13:46:13 old-k8s-version-417583 kubelet[728]: I1213 13:46:13.295354     728 scope.go:117] "RemoveContainer" containerID="81a3fc4a782303fe043c7b13634e7071cf3ae07a96a06e4dddbb01517edf8214"
	Dec 13 13:46:13 old-k8s-version-417583 kubelet[728]: E1213 13:46:13.295747     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4qxf4_kubernetes-dashboard(192ed0be-4bec-4260-b59f-129791a3f292)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4qxf4" podUID="192ed0be-4bec-4260-b59f-129791a3f292"
	Dec 13 13:46:21 old-k8s-version-417583 kubelet[728]: I1213 13:46:21.216642     728 scope.go:117] "RemoveContainer" containerID="81a3fc4a782303fe043c7b13634e7071cf3ae07a96a06e4dddbb01517edf8214"
	Dec 13 13:46:21 old-k8s-version-417583 kubelet[728]: E1213 13:46:21.216938     728 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-4qxf4_kubernetes-dashboard(192ed0be-4bec-4260-b59f-129791a3f292)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-4qxf4" podUID="192ed0be-4bec-4260-b59f-129791a3f292"
	Dec 13 13:46:30 old-k8s-version-417583 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 13:46:30 old-k8s-version-417583 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 13:46:30 old-k8s-version-417583 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 13:46:30 old-k8s-version-417583 systemd[1]: kubelet.service: Consumed 1.576s CPU time.
	
	
	==> kubernetes-dashboard [b333c1dd58c753a6d8c0b646480033e4d67d49277fa5bd2d1b8355fcf576cc3b] <==
	2025/12/13 13:45:57 Starting overwatch
	2025/12/13 13:45:57 Using namespace: kubernetes-dashboard
	2025/12/13 13:45:57 Using in-cluster config to connect to apiserver
	2025/12/13 13:45:57 Using secret token for csrf signing
	2025/12/13 13:45:57 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 13:45:57 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 13:45:57 Successful initial request to the apiserver, version: v1.28.0
	2025/12/13 13:45:57 Generating JWE encryption key
	2025/12/13 13:45:57 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 13:45:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 13:45:57 Initializing JWE encryption key from synchronized object
	2025/12/13 13:45:57 Creating in-cluster Sidecar client
	2025/12/13 13:45:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 13:45:57 Serving insecurely on HTTP port: 9090
	2025/12/13 13:46:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [79234c842f27555cb96ebeddf4318727c659169fff23ba629b737fffd2c85c24] <==
	I1213 13:45:39.561065       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 13:46:09.563606       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [bf316ec39e1247adf8dc8543e13846287c9472cc6e75cf6ed70278cf73884a0a] <==
	I1213 13:46:10.343608       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 13:46:10.355489       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 13:46:10.355548       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1213 13:46:27.752986       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 13:46:27.753064       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0650e3a6-aaf8-4fe6-b96a-06ebf14116a7", APIVersion:"v1", ResourceVersion:"620", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-417583_61b17105-db53-4799-811c-d6672220ca76 became leader
	I1213 13:46:27.753112       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-417583_61b17105-db53-4799-811c-d6672220ca76!
	I1213 13:46:27.854137       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-417583_61b17105-db53-4799-811c-d6672220ca76!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-417583 -n old-k8s-version-417583
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-417583 -n old-k8s-version-417583: exit status 2 (343.033341ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-417583 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.36s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (5.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-992258 --alsologtostderr -v=1
E1213 13:46:51.580110  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-992258 --alsologtostderr -v=1: exit status 80 (1.675159836s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-992258 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:46:50.922226  737157 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:46:50.922357  737157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:46:50.922372  737157 out.go:374] Setting ErrFile to fd 2...
	I1213 13:46:50.922379  737157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:46:50.922612  737157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:46:50.922918  737157 out.go:368] Setting JSON to false
	I1213 13:46:50.922940  737157 mustload.go:66] Loading cluster: no-preload-992258
	I1213 13:46:50.923342  737157 config.go:182] Loaded profile config "no-preload-992258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:46:50.923815  737157 cli_runner.go:164] Run: docker container inspect no-preload-992258 --format={{.State.Status}}
	I1213 13:46:50.943076  737157 host.go:66] Checking if "no-preload-992258" exists ...
	I1213 13:46:50.943438  737157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:46:51.005766  737157 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-13 13:46:50.994998523 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:46:51.006392  737157 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765613186-22122/minikube-v1.37.0-1765613186-22122-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765613186-22122-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-992258 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1213 13:46:51.008184  737157 out.go:179] * Pausing node no-preload-992258 ... 
	I1213 13:46:51.009233  737157 host.go:66] Checking if "no-preload-992258" exists ...
	I1213 13:46:51.009466  737157 ssh_runner.go:195] Run: systemctl --version
	I1213 13:46:51.009508  737157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-992258
	I1213 13:46:51.028767  737157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/no-preload-992258/id_rsa Username:docker}
	I1213 13:46:51.126539  737157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:46:51.146540  737157 pause.go:52] kubelet running: true
	I1213 13:46:51.146610  737157 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 13:46:51.327580  737157 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 13:46:51.327651  737157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 13:46:51.399907  737157 cri.go:89] found id: "2d56b5a331f4ca246a909031b824e174119e2403583309ef04c380d1092001eb"
	I1213 13:46:51.399943  737157 cri.go:89] found id: "801de776f76929010a0f1c9e14f42cda1b053140754f6395d039186175e1ea80"
	I1213 13:46:51.399949  737157 cri.go:89] found id: "263fb19e23abb4d9244914e355a9fe801ab84ad39c6d84f9d3d30afae3172ba2"
	I1213 13:46:51.399955  737157 cri.go:89] found id: "c671fd402d975e8ae24c777b924655550f32179268205781b348f0a491b5526f"
	I1213 13:46:51.399959  737157 cri.go:89] found id: "a78a5599787f3499391e0c432d3d1abd39385a3618ed54a5cba6601b8a71284b"
	I1213 13:46:51.399966  737157 cri.go:89] found id: "45bf7a76efd360f1d23c44bb11c5c8a0f673954074b69b3130fea721533cb52c"
	I1213 13:46:51.399969  737157 cri.go:89] found id: "9562ef2afadd58588eb9f2ee3f8f0cf7f987ad9ae64f202a3c2bc83ff04864c0"
	I1213 13:46:51.399972  737157 cri.go:89] found id: "8dcbdf570cbc878b3202fdfd071d0477d8d282c28592111b59e9f42fd44842b9"
	I1213 13:46:51.399975  737157 cri.go:89] found id: "15112b75b1e5daf4777acbd4a1bc72aa48be95dbc7a9d989384f13be2d385572"
	I1213 13:46:51.399981  737157 cri.go:89] found id: "adb9dcbdc1b93162ba6534511972d54a30ff5daeb5209b44d9548d7732ab6c8a"
	I1213 13:46:51.399984  737157 cri.go:89] found id: "b5e5b43f17886e53be725c2b848298a4b1825dc9c18fa4ea1aec41a64b43407d"
	I1213 13:46:51.399986  737157 cri.go:89] found id: ""
	I1213 13:46:51.400031  737157 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:46:51.411847  737157 retry.go:31] will retry after 366.430795ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:46:51Z" level=error msg="open /run/runc: no such file or directory"
	I1213 13:46:51.778458  737157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:46:51.791764  737157 pause.go:52] kubelet running: false
	I1213 13:46:51.791844  737157 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 13:46:51.941547  737157 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 13:46:51.941654  737157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 13:46:52.012418  737157 cri.go:89] found id: "2d56b5a331f4ca246a909031b824e174119e2403583309ef04c380d1092001eb"
	I1213 13:46:52.012442  737157 cri.go:89] found id: "801de776f76929010a0f1c9e14f42cda1b053140754f6395d039186175e1ea80"
	I1213 13:46:52.012447  737157 cri.go:89] found id: "263fb19e23abb4d9244914e355a9fe801ab84ad39c6d84f9d3d30afae3172ba2"
	I1213 13:46:52.012450  737157 cri.go:89] found id: "c671fd402d975e8ae24c777b924655550f32179268205781b348f0a491b5526f"
	I1213 13:46:52.012453  737157 cri.go:89] found id: "a78a5599787f3499391e0c432d3d1abd39385a3618ed54a5cba6601b8a71284b"
	I1213 13:46:52.012458  737157 cri.go:89] found id: "45bf7a76efd360f1d23c44bb11c5c8a0f673954074b69b3130fea721533cb52c"
	I1213 13:46:52.012461  737157 cri.go:89] found id: "9562ef2afadd58588eb9f2ee3f8f0cf7f987ad9ae64f202a3c2bc83ff04864c0"
	I1213 13:46:52.012464  737157 cri.go:89] found id: "8dcbdf570cbc878b3202fdfd071d0477d8d282c28592111b59e9f42fd44842b9"
	I1213 13:46:52.012467  737157 cri.go:89] found id: "15112b75b1e5daf4777acbd4a1bc72aa48be95dbc7a9d989384f13be2d385572"
	I1213 13:46:52.012482  737157 cri.go:89] found id: "adb9dcbdc1b93162ba6534511972d54a30ff5daeb5209b44d9548d7732ab6c8a"
	I1213 13:46:52.012485  737157 cri.go:89] found id: "b5e5b43f17886e53be725c2b848298a4b1825dc9c18fa4ea1aec41a64b43407d"
	I1213 13:46:52.012487  737157 cri.go:89] found id: ""
	I1213 13:46:52.012546  737157 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:46:52.024419  737157 retry.go:31] will retry after 248.142269ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:46:52Z" level=error msg="open /run/runc: no such file or directory"
	I1213 13:46:52.272714  737157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:46:52.285569  737157 pause.go:52] kubelet running: false
	I1213 13:46:52.285637  737157 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 13:46:52.435617  737157 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 13:46:52.435703  737157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 13:46:52.510507  737157 cri.go:89] found id: "2d56b5a331f4ca246a909031b824e174119e2403583309ef04c380d1092001eb"
	I1213 13:46:52.510529  737157 cri.go:89] found id: "801de776f76929010a0f1c9e14f42cda1b053140754f6395d039186175e1ea80"
	I1213 13:46:52.510535  737157 cri.go:89] found id: "263fb19e23abb4d9244914e355a9fe801ab84ad39c6d84f9d3d30afae3172ba2"
	I1213 13:46:52.510540  737157 cri.go:89] found id: "c671fd402d975e8ae24c777b924655550f32179268205781b348f0a491b5526f"
	I1213 13:46:52.510544  737157 cri.go:89] found id: "a78a5599787f3499391e0c432d3d1abd39385a3618ed54a5cba6601b8a71284b"
	I1213 13:46:52.510550  737157 cri.go:89] found id: "45bf7a76efd360f1d23c44bb11c5c8a0f673954074b69b3130fea721533cb52c"
	I1213 13:46:52.510561  737157 cri.go:89] found id: "9562ef2afadd58588eb9f2ee3f8f0cf7f987ad9ae64f202a3c2bc83ff04864c0"
	I1213 13:46:52.510566  737157 cri.go:89] found id: "8dcbdf570cbc878b3202fdfd071d0477d8d282c28592111b59e9f42fd44842b9"
	I1213 13:46:52.510570  737157 cri.go:89] found id: "15112b75b1e5daf4777acbd4a1bc72aa48be95dbc7a9d989384f13be2d385572"
	I1213 13:46:52.510579  737157 cri.go:89] found id: "adb9dcbdc1b93162ba6534511972d54a30ff5daeb5209b44d9548d7732ab6c8a"
	I1213 13:46:52.510583  737157 cri.go:89] found id: "b5e5b43f17886e53be725c2b848298a4b1825dc9c18fa4ea1aec41a64b43407d"
	I1213 13:46:52.510591  737157 cri.go:89] found id: ""
	I1213 13:46:52.510648  737157 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:46:52.525552  737157 out.go:203] 
	W1213 13:46:52.526645  737157 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:46:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:46:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 13:46:52.526667  737157 out.go:285] * 
	* 
	W1213 13:46:52.531666  737157 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 13:46:52.532907  737157 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-992258 --alsologtostderr -v=1 failed: exit status 80
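The failing step above is `sudo runc list -f json`, which exits 1 with "open /run/runc: no such file or directory" even though crictl still reports running containers; /run/runc is runc's default state directory. A minimal manual reproduction against this profile (a sketch only: the crictl and runc invocations are copied verbatim from the log above, and `minikube ssh` is assumed to reach the same kicbase node):

	$ out/minikube-linux-amd64 -p no-preload-992258 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	$ out/minikube-linux-amd64 -p no-preload-992258 ssh -- sudo runc list -f json
	$ out/minikube-linux-amd64 -p no-preload-992258 ssh -- ls -ld /run/runc

If the first command still prints container IDs while the other two fail on a missing /run/runc, the GUEST_PAUSE exit above reproduces outside the test harness.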
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-992258
helpers_test.go:244: (dbg) docker inspect no-preload-992258:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1ee238da5195f26130843a1fef5cc5d89d2b40177ad305da75ce0a8298d9c5a7",
	        "Created": "2025-12-13T13:44:34.580077423Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 723481,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:45:47.196858076Z",
	            "FinishedAt": "2025-12-13T13:45:46.316395573Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/1ee238da5195f26130843a1fef5cc5d89d2b40177ad305da75ce0a8298d9c5a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1ee238da5195f26130843a1fef5cc5d89d2b40177ad305da75ce0a8298d9c5a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/1ee238da5195f26130843a1fef5cc5d89d2b40177ad305da75ce0a8298d9c5a7/hosts",
	        "LogPath": "/var/lib/docker/containers/1ee238da5195f26130843a1fef5cc5d89d2b40177ad305da75ce0a8298d9c5a7/1ee238da5195f26130843a1fef5cc5d89d2b40177ad305da75ce0a8298d9c5a7-json.log",
	        "Name": "/no-preload-992258",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-992258:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-992258",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1ee238da5195f26130843a1fef5cc5d89d2b40177ad305da75ce0a8298d9c5a7",
	                "LowerDir": "/var/lib/docker/overlay2/e62da2e21090d931262b0bfdee947efa3f7e7addf083b74e9377f9573a972c68-init/diff:/var/lib/docker/overlay2/2ab30f867418f233812f5ff754587aaeab7569a5579dc6a5c99873a35cf81eb6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e62da2e21090d931262b0bfdee947efa3f7e7addf083b74e9377f9573a972c68/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e62da2e21090d931262b0bfdee947efa3f7e7addf083b74e9377f9573a972c68/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e62da2e21090d931262b0bfdee947efa3f7e7addf083b74e9377f9573a972c68/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-992258",
	                "Source": "/var/lib/docker/volumes/no-preload-992258/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-992258",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-992258",
	                "name.minikube.sigs.k8s.io": "no-preload-992258",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "df263f38da077406d26ae3e17b9b3ecc49db5a00c55e08d3c705d0aa51aff415",
	            "SandboxKey": "/var/run/docker/netns/df263f38da07",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33498"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33499"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33503"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33500"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33501"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-992258": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6b03146af25791542829a33be34e6cdd463680d204ddd7fe7766c21dca4ab829",
	                    "EndpointID": "b1524a40f3af1ec44f444eb434dd147646c32235a66c3e135a154e0ce7cba698",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "ae:6e:83:67:f6:45",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-992258",
	                        "1ee238da5195"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
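For reference, the 22/tcp mapping recorded above (127.0.0.1:33498) is the same port the pause path resolved at the start of the log via a Go template; the equivalent standalone check (a sketch reusing the template shown in the log) is:

	$ docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-992258

which, for the state captured in this inspect output, prints 33498, matching the SSH client opened by the pause command.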
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-992258 -n no-preload-992258
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-992258 -n no-preload-992258: exit status 2 (331.212636ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-992258 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-992258 logs -n 25: (1.152379748s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ ssh     │ -p bridge-884214 sudo crio config                                                                                                                                                                                                                    │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ delete  │ -p bridge-884214                                                                                                                                                                                                                                     │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ delete  │ -p disable-driver-mounts-031848                                                                                                                                                                                                                      │ disable-driver-mounts-031848 │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p default-k8s-diff-port-038239 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ addons  │ enable metrics-server -p no-preload-992258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-417583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p old-k8s-version-417583 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ stop    │ -p no-preload-992258 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ addons  │ enable metrics-server -p embed-certs-973953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ stop    │ -p embed-certs-973953 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ addons  │ enable dashboard -p no-preload-992258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p no-preload-992258 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ addons  │ enable dashboard -p embed-certs-973953 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p embed-certs-973953 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-038239 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-038239 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ image   │ old-k8s-version-417583 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p old-k8s-version-417583 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-038239 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p default-k8s-diff-port-038239 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ delete  │ -p old-k8s-version-417583                                                                                                                                                                                                                            │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ delete  │ -p old-k8s-version-417583                                                                                                                                                                                                                            │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p newest-cni-362964 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ image   │ no-preload-992258 image list --format=json                                                                                                                                                                                                           │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p no-preload-992258 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:46:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:46:38.807259  734452 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:46:38.807356  734452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:46:38.807364  734452 out.go:374] Setting ErrFile to fd 2...
	I1213 13:46:38.807368  734452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:46:38.807581  734452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:46:38.808124  734452 out.go:368] Setting JSON to false
	I1213 13:46:38.809505  734452 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8947,"bootTime":1765624652,"procs":408,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:46:38.809572  734452 start.go:143] virtualization: kvm guest
	I1213 13:46:38.811798  734452 out.go:179] * [newest-cni-362964] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:46:38.813823  734452 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:46:38.813876  734452 notify.go:221] Checking for updates...
	I1213 13:46:38.816262  734452 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:46:38.817585  734452 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:46:38.818693  734452 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:46:38.820057  734452 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:46:38.821335  734452 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:46:38.823198  734452 config.go:182] Loaded profile config "default-k8s-diff-port-038239": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:46:38.823338  734452 config.go:182] Loaded profile config "embed-certs-973953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:46:38.823469  734452 config.go:182] Loaded profile config "no-preload-992258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:46:38.823581  734452 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:46:38.861614  734452 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:46:38.861761  734452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:46:38.931148  734452 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-13 13:46:38.919230241 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:46:38.931318  734452 docker.go:319] overlay module found
	I1213 13:46:38.933289  734452 out.go:179] * Using the docker driver based on user configuration
	I1213 13:46:38.934577  734452 start.go:309] selected driver: docker
	I1213 13:46:38.934599  734452 start.go:927] validating driver "docker" against <nil>
	I1213 13:46:38.934616  734452 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:46:38.935491  734452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:46:39.004706  734452 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-13 13:46:38.992987781 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:46:39.004928  734452 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 13:46:39.004966  734452 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 13:46:39.005271  734452 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 13:46:39.007551  734452 out.go:179] * Using Docker driver with root privileges
	I1213 13:46:39.008611  734452 cni.go:84] Creating CNI manager for ""
	I1213 13:46:39.008719  734452 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:46:39.008737  734452 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 13:46:39.008854  734452 start.go:353] cluster config:
	{Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:46:39.010974  734452 out.go:179] * Starting "newest-cni-362964" primary control-plane node in "newest-cni-362964" cluster
	I1213 13:46:39.012247  734452 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 13:46:39.013645  734452 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 13:46:39.016856  734452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:46:39.016895  734452 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1213 13:46:39.016914  734452 cache.go:65] Caching tarball of preloaded images
	I1213 13:46:39.016962  734452 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 13:46:39.017009  734452 preload.go:238] Found /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 13:46:39.017022  734452 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 13:46:39.017144  734452 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/config.json ...
	I1213 13:46:39.017168  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/config.json: {Name:mk03f8124fe1745099f3d3cb3fe7fe5ae5e6b929 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:39.044079  734452 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 13:46:39.044103  734452 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 13:46:39.044123  734452 cache.go:243] Successfully downloaded all kic artifacts
	I1213 13:46:39.044162  734452 start.go:360] acquireMachinesLock for newest-cni-362964: {Name:mk61572d281c54a6e0670409b0733cc12a8d00e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:46:39.044269  734452 start.go:364] duration metric: took 87.606µs to acquireMachinesLock for "newest-cni-362964"
	I1213 13:46:39.044501  734452 start.go:93] Provisioning new machine with config: &{Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:46:39.044595  734452 start.go:125] createHost starting for "" (driver="docker")
	I1213 13:46:37.593032  723278 pod_ready.go:94] pod "coredns-7d764666f9-qfkgp" is "Ready"
	I1213 13:46:37.593060  723278 pod_ready.go:86] duration metric: took 39.506081408s for pod "coredns-7d764666f9-qfkgp" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.595721  723278 pod_ready.go:83] waiting for pod "etcd-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.600005  723278 pod_ready.go:94] pod "etcd-no-preload-992258" is "Ready"
	I1213 13:46:37.600027  723278 pod_ready.go:86] duration metric: took 4.283645ms for pod "etcd-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.602349  723278 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.606335  723278 pod_ready.go:94] pod "kube-apiserver-no-preload-992258" is "Ready"
	I1213 13:46:37.606353  723278 pod_ready.go:86] duration metric: took 3.985408ms for pod "kube-apiserver-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.608278  723278 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.793439  723278 pod_ready.go:94] pod "kube-controller-manager-no-preload-992258" is "Ready"
	I1213 13:46:37.793538  723278 pod_ready.go:86] duration metric: took 185.240657ms for pod "kube-controller-manager-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.993814  723278 pod_ready.go:83] waiting for pod "kube-proxy-sjrzk" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:38.391287  723278 pod_ready.go:94] pod "kube-proxy-sjrzk" is "Ready"
	I1213 13:46:38.391316  723278 pod_ready.go:86] duration metric: took 397.467202ms for pod "kube-proxy-sjrzk" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:38.592664  723278 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:38.991819  723278 pod_ready.go:94] pod "kube-scheduler-no-preload-992258" is "Ready"
	I1213 13:46:38.991855  723278 pod_ready.go:86] duration metric: took 399.165979ms for pod "kube-scheduler-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:38.991870  723278 pod_ready.go:40] duration metric: took 40.907684385s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:46:39.055074  723278 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1213 13:46:39.056693  723278 out.go:179] * Done! kubectl is now configured to use "no-preload-992258" cluster and "default" namespace by default
	I1213 13:46:37.744577  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 13:46:37.744596  730912 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 13:46:37.744659  730912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-038239
	I1213 13:46:37.769735  730912 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 13:46:37.769842  730912 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 13:46:37.769924  730912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa Username:docker}
	I1213 13:46:37.769942  730912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-038239
	I1213 13:46:37.773997  730912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa Username:docker}
	I1213 13:46:37.806607  730912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa Username:docker}
	I1213 13:46:37.885020  730912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:46:37.892323  730912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:46:37.901908  730912 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-038239" to be "Ready" ...
	I1213 13:46:37.908074  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 13:46:37.908095  730912 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 13:46:37.924625  730912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 13:46:37.926038  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 13:46:37.926060  730912 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 13:46:37.942015  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 13:46:37.942038  730912 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 13:46:37.961315  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 13:46:37.961339  730912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 13:46:37.979600  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 13:46:37.979629  730912 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 13:46:38.003635  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 13:46:38.003660  730912 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 13:46:38.019334  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 13:46:38.019359  730912 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 13:46:38.036465  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 13:46:38.036507  730912 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 13:46:38.053804  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 13:46:38.053835  730912 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 13:46:38.071650  730912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 13:46:39.597072  730912 node_ready.go:49] node "default-k8s-diff-port-038239" is "Ready"
	I1213 13:46:39.597127  730912 node_ready.go:38] duration metric: took 1.695171527s for node "default-k8s-diff-port-038239" to be "Ready" ...
	I1213 13:46:39.597146  730912 api_server.go:52] waiting for apiserver process to appear ...
	I1213 13:46:39.597331  730912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:46:40.220696  730912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.328338683s)
	I1213 13:46:40.220801  730912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.296116857s)
	I1213 13:46:40.220919  730912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.149240842s)
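The three kubectl apply batches above finished in roughly 2.1-2.3s each. A minimal sketch of checking the result by hand (standard kubectl; the kubernetes-dashboard namespace is what the bundled dashboard-ns.yaml normally creates, and the context name is assumed to match the profile in this log):

	kubectl --context default-k8s-diff-port-038239 -n kubernetes-dashboard get deploy,svc,sa
	kubectl --context default-k8s-diff-port-038239 get storageclass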
	I1213 13:46:40.221000  730912 api_server.go:72] duration metric: took 2.51244991s to wait for apiserver process to appear ...
	I1213 13:46:40.221052  730912 api_server.go:88] waiting for apiserver healthz status ...
	I1213 13:46:40.221075  730912 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1213 13:46:40.223057  730912 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-038239 addons enable metrics-server
	
	I1213 13:46:40.226524  730912 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:46:40.226548  730912 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
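The 500 above is driven by the two poststarthooks still marked [-] (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes); once they complete, the same endpoint returns 200, as seen further down in this log. A minimal sketch of reproducing the verbose probe by hand (standard kubectl/curl usage; address and context taken from this run):

	kubectl --context default-k8s-diff-port-038239 get --raw '/healthz?verbose'
	# or straight against the forwarded apiserver port, skipping TLS verification:
	curl -k 'https://192.168.94.2:8444/healthz?verbose'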
	I1213 13:46:40.228246  730912 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1213 13:46:40.229402  730912 addons.go:530] duration metric: took 2.520798966s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	W1213 13:46:37.552331  726383 pod_ready.go:104] pod "coredns-66bc5c9577-bl59n" is not "Ready", error: <nil>
	W1213 13:46:39.558845  726383 pod_ready.go:104] pod "coredns-66bc5c9577-bl59n" is not "Ready", error: <nil>
	I1213 13:46:39.050825  734452 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 13:46:39.051127  734452 start.go:159] libmachine.API.Create for "newest-cni-362964" (driver="docker")
	I1213 13:46:39.051170  734452 client.go:173] LocalClient.Create starting
	I1213 13:46:39.051291  734452 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem
	I1213 13:46:39.051338  734452 main.go:143] libmachine: Decoding PEM data...
	I1213 13:46:39.051367  734452 main.go:143] libmachine: Parsing certificate...
	I1213 13:46:39.051431  734452 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem
	I1213 13:46:39.051459  734452 main.go:143] libmachine: Decoding PEM data...
	I1213 13:46:39.051478  734452 main.go:143] libmachine: Parsing certificate...
	I1213 13:46:39.051941  734452 cli_runner.go:164] Run: docker network inspect newest-cni-362964 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 13:46:39.074137  734452 cli_runner.go:211] docker network inspect newest-cni-362964 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 13:46:39.074224  734452 network_create.go:284] running [docker network inspect newest-cni-362964] to gather additional debugging logs...
	I1213 13:46:39.074248  734452 cli_runner.go:164] Run: docker network inspect newest-cni-362964
	W1213 13:46:39.102273  734452 cli_runner.go:211] docker network inspect newest-cni-362964 returned with exit code 1
	I1213 13:46:39.102343  734452 network_create.go:287] error running [docker network inspect newest-cni-362964]: docker network inspect newest-cni-362964: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-362964 not found
	I1213 13:46:39.102377  734452 network_create.go:289] output of [docker network inspect newest-cni-362964]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-362964 not found
	
	** /stderr **
	I1213 13:46:39.102549  734452 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:46:39.122483  734452 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-90c6185d3a1c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:d7:d8:45:ed:62} reservation:<nil>}
	I1213 13:46:39.123444  734452 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b99c511b2851 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:f5:60:cf:cf:e0} reservation:<nil>}
	I1213 13:46:39.124137  734452 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8173e81c4a82 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:76:c5:9d:b0:f9} reservation:<nil>}
	I1213 13:46:39.125173  734452 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed8a30}
	I1213 13:46:39.125201  734452 network_create.go:124] attempt to create docker network newest-cni-362964 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 13:46:39.125260  734452 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-362964 newest-cni-362964
	I1213 13:46:39.179901  734452 network_create.go:108] docker network newest-cni-362964 192.168.76.0/24 created
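A minimal sketch of double-checking the subnet and gateway chosen above (standard docker CLI; network name from this log):

	docker network inspect newest-cni-362964 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.76.0/24 192.168.76.1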
	I1213 13:46:39.179928  734452 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-362964" container
	I1213 13:46:39.179979  734452 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 13:46:39.213973  734452 cli_runner.go:164] Run: docker volume create newest-cni-362964 --label name.minikube.sigs.k8s.io=newest-cni-362964 --label created_by.minikube.sigs.k8s.io=true
	I1213 13:46:39.235544  734452 oci.go:103] Successfully created a docker volume newest-cni-362964
	I1213 13:46:39.235642  734452 cli_runner.go:164] Run: docker run --rm --name newest-cni-362964-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-362964 --entrypoint /usr/bin/test -v newest-cni-362964:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 13:46:39.751588  734452 oci.go:107] Successfully prepared a docker volume newest-cni-362964
	I1213 13:46:39.751676  734452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:46:39.751688  734452 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 13:46:39.751766  734452 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-362964:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 13:46:40.721469  730912 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1213 13:46:40.727005  730912 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:46:40.727036  730912 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:46:41.221758  730912 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1213 13:46:41.227300  730912 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1213 13:46:41.228302  730912 api_server.go:141] control plane version: v1.34.2
	I1213 13:46:41.228325  730912 api_server.go:131] duration metric: took 1.007264269s to wait for apiserver health ...
	I1213 13:46:41.228334  730912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 13:46:41.231822  730912 system_pods.go:59] 8 kube-system pods found
	I1213 13:46:41.231857  730912 system_pods.go:61] "coredns-66bc5c9577-tzzmx" [980da903-c99d-4518-9ee3-7e5a96adec7e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:46:41.231869  730912 system_pods.go:61] "etcd-default-k8s-diff-port-038239" [4281e3fe-09b2-4f4b-b735-e81d8f92611d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 13:46:41.231876  730912 system_pods.go:61] "kindnet-c65rs" [70da74c6-b3f7-4c93-830f-cd2e08c1a82b] Running
	I1213 13:46:41.231882  730912 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038239" [61e90c83-4a74-41da-af00-64ad96e831b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 13:46:41.231891  730912 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038239" [327b2203-201b-4496-b88d-085894210077] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 13:46:41.231897  730912 system_pods.go:61] "kube-proxy-lzwfg" [706752fb-a589-4e6f-b710-228e3650dacd] Running
	I1213 13:46:41.231905  730912 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038239" [ae96dbde-d4ad-4db9-a9d4-dd56f9954d93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 13:46:41.231912  730912 system_pods.go:61] "storage-provisioner" [ee84dbb0-2764-427e-aa74-2827e9ce9620] Running
	I1213 13:46:41.231923  730912 system_pods.go:74] duration metric: took 3.580887ms to wait for pod list to return data ...
	I1213 13:46:41.231936  730912 default_sa.go:34] waiting for default service account to be created ...
	I1213 13:46:41.234505  730912 default_sa.go:45] found service account: "default"
	I1213 13:46:41.234528  730912 default_sa.go:55] duration metric: took 2.585513ms for default service account to be created ...
	I1213 13:46:41.234537  730912 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 13:46:41.237182  730912 system_pods.go:86] 8 kube-system pods found
	I1213 13:46:41.237209  730912 system_pods.go:89] "coredns-66bc5c9577-tzzmx" [980da903-c99d-4518-9ee3-7e5a96adec7e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:46:41.237220  730912 system_pods.go:89] "etcd-default-k8s-diff-port-038239" [4281e3fe-09b2-4f4b-b735-e81d8f92611d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 13:46:41.237227  730912 system_pods.go:89] "kindnet-c65rs" [70da74c6-b3f7-4c93-830f-cd2e08c1a82b] Running
	I1213 13:46:41.237236  730912 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-038239" [61e90c83-4a74-41da-af00-64ad96e831b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 13:46:41.237245  730912 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-038239" [327b2203-201b-4496-b88d-085894210077] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 13:46:41.237253  730912 system_pods.go:89] "kube-proxy-lzwfg" [706752fb-a589-4e6f-b710-228e3650dacd] Running
	I1213 13:46:41.237261  730912 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-038239" [ae96dbde-d4ad-4db9-a9d4-dd56f9954d93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 13:46:41.237271  730912 system_pods.go:89] "storage-provisioner" [ee84dbb0-2764-427e-aa74-2827e9ce9620] Running
	I1213 13:46:41.237279  730912 system_pods.go:126] duration metric: took 2.735704ms to wait for k8s-apps to be running ...
	I1213 13:46:41.237288  730912 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 13:46:41.237331  730912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:46:41.250597  730912 system_svc.go:56] duration metric: took 13.296933ms WaitForService to wait for kubelet
	I1213 13:46:41.250630  730912 kubeadm.go:587] duration metric: took 3.542081461s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:46:41.250655  730912 node_conditions.go:102] verifying NodePressure condition ...
	I1213 13:46:41.254078  730912 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 13:46:41.254103  730912 node_conditions.go:123] node cpu capacity is 8
	I1213 13:46:41.254126  730912 node_conditions.go:105] duration metric: took 3.462529ms to run NodePressure ...
	I1213 13:46:41.254141  730912 start.go:242] waiting for startup goroutines ...
	I1213 13:46:41.254155  730912 start.go:247] waiting for cluster config update ...
	I1213 13:46:41.254174  730912 start.go:256] writing updated cluster config ...
	I1213 13:46:41.254482  730912 ssh_runner.go:195] Run: rm -f paused
	I1213 13:46:41.258509  730912 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:46:41.262286  730912 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tzzmx" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 13:46:43.315769  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	W1213 13:46:42.051398  726383 pod_ready.go:104] pod "coredns-66bc5c9577-bl59n" is not "Ready", error: <nil>
	I1213 13:46:44.558674  726383 pod_ready.go:94] pod "coredns-66bc5c9577-bl59n" is "Ready"
	I1213 13:46:44.558713  726383 pod_ready.go:86] duration metric: took 32.012951382s for pod "coredns-66bc5c9577-bl59n" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.561144  726383 pod_ready.go:83] waiting for pod "etcd-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.565899  726383 pod_ready.go:94] pod "etcd-embed-certs-973953" is "Ready"
	I1213 13:46:44.565923  726383 pod_ready.go:86] duration metric: took 4.7423ms for pod "etcd-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.568261  726383 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.572565  726383 pod_ready.go:94] pod "kube-apiserver-embed-certs-973953" is "Ready"
	I1213 13:46:44.572592  726383 pod_ready.go:86] duration metric: took 4.304087ms for pod "kube-apiserver-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.575031  726383 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.750453  726383 pod_ready.go:94] pod "kube-controller-manager-embed-certs-973953" is "Ready"
	I1213 13:46:44.750489  726383 pod_ready.go:86] duration metric: took 175.430643ms for pod "kube-controller-manager-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.951317  726383 pod_ready.go:83] waiting for pod "kube-proxy-jqcpv" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:45.350477  726383 pod_ready.go:94] pod "kube-proxy-jqcpv" is "Ready"
	I1213 13:46:45.350507  726383 pod_ready.go:86] duration metric: took 399.159038ms for pod "kube-proxy-jqcpv" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:45.550818  726383 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:45.950357  726383 pod_ready.go:94] pod "kube-scheduler-embed-certs-973953" is "Ready"
	I1213 13:46:45.950385  726383 pod_ready.go:86] duration metric: took 399.541821ms for pod "kube-scheduler-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:45.950396  726383 pod_ready.go:40] duration metric: took 33.408030209s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:46:46.003877  726383 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 13:46:46.006266  726383 out.go:179] * Done! kubectl is now configured to use "embed-certs-973953" cluster and "default" namespace by default
	I1213 13:46:43.827925  734452 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-362964:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.0760512s)
	I1213 13:46:43.827966  734452 kic.go:203] duration metric: took 4.076273522s to extract preloaded images to volume ...
	W1213 13:46:43.828063  734452 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1213 13:46:43.828111  734452 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1213 13:46:43.828160  734452 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 13:46:43.885693  734452 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-362964 --name newest-cni-362964 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-362964 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-362964 --network newest-cni-362964 --ip 192.168.76.2 --volume newest-cni-362964:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 13:46:44.183753  734452 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Running}}
	I1213 13:46:44.203369  734452 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:46:44.223422  734452 cli_runner.go:164] Run: docker exec newest-cni-362964 stat /var/lib/dpkg/alternatives/iptables
	I1213 13:46:44.277034  734452 oci.go:144] the created container "newest-cni-362964" has a running status.
	I1213 13:46:44.277064  734452 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa...
	I1213 13:46:44.344914  734452 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 13:46:44.377198  734452 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:46:44.402053  734452 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 13:46:44.402083  734452 kic_runner.go:114] Args: [docker exec --privileged newest-cni-362964 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 13:46:44.478040  734452 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
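The provisioner below dials SSH on whatever host port Docker assigned when publishing 127.0.0.1::22. A minimal sketch of reading that mapping directly (standard docker CLI; 33515 is the value this run ended up with):

	docker port newest-cni-362964 22/tcp
	# e.g. 127.0.0.1:33515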
	I1213 13:46:44.506931  734452 machine.go:94] provisionDockerMachine start ...
	I1213 13:46:44.507418  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:44.537001  734452 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:44.537395  734452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1213 13:46:44.537427  734452 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:46:44.538118  734452 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48464->127.0.0.1:33515: read: connection reset by peer
	I1213 13:46:47.689037  734452 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-362964
	
	I1213 13:46:47.689072  734452 ubuntu.go:182] provisioning hostname "newest-cni-362964"
	I1213 13:46:47.689140  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:47.712543  734452 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:47.713000  734452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1213 13:46:47.713025  734452 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-362964 && echo "newest-cni-362964" | sudo tee /etc/hostname
	I1213 13:46:47.873217  734452 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-362964
	
	I1213 13:46:47.873318  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:47.896725  734452 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:47.897081  734452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1213 13:46:47.897130  734452 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-362964' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-362964/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-362964' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:46:48.044203  734452 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 13:46:48.044232  734452 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-390571/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-390571/.minikube}
	I1213 13:46:48.044289  734452 ubuntu.go:190] setting up certificates
	I1213 13:46:48.044304  734452 provision.go:84] configureAuth start
	I1213 13:46:48.044368  734452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:46:48.068662  734452 provision.go:143] copyHostCerts
	I1213 13:46:48.068728  734452 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem, removing ...
	I1213 13:46:48.068739  734452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem
	I1213 13:46:48.068879  734452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem (1123 bytes)
	I1213 13:46:48.069004  734452 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem, removing ...
	I1213 13:46:48.069048  734452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem
	I1213 13:46:48.069113  734452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem (1679 bytes)
	I1213 13:46:48.069294  734452 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem, removing ...
	I1213 13:46:48.069312  734452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem
	I1213 13:46:48.069355  734452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem (1078 bytes)
	I1213 13:46:48.069462  734452 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem org=jenkins.newest-cni-362964 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-362964]
	I1213 13:46:48.220174  734452 provision.go:177] copyRemoteCerts
	I1213 13:46:48.220240  734452 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:46:48.220284  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:48.242055  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:46:48.348835  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 13:46:48.372845  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:46:48.394838  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
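A minimal sketch of inspecting the SANs baked into the generated server certificate (standard openssl; the path is the one copied above, and the expected names are those listed in the san=[...] line of this log):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'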
	I1213 13:46:48.416450  734452 provision.go:87] duration metric: took 372.119155ms to configureAuth
	I1213 13:46:48.416488  734452 ubuntu.go:206] setting minikube options for container-runtime
	I1213 13:46:48.416718  734452 config.go:182] Loaded profile config "newest-cni-362964": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:46:48.416935  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:48.438340  734452 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:48.438572  734452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1213 13:46:48.438593  734452 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:46:48.772615  734452 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:46:48.772642  734452 machine.go:97] duration metric: took 4.265315999s to provisionDockerMachine
	I1213 13:46:48.772654  734452 client.go:176] duration metric: took 9.721476668s to LocalClient.Create
	I1213 13:46:48.772675  734452 start.go:167] duration metric: took 9.721549598s to libmachine.API.Create "newest-cni-362964"
	I1213 13:46:48.772685  734452 start.go:293] postStartSetup for "newest-cni-362964" (driver="docker")
	I1213 13:46:48.772700  734452 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:46:48.772766  734452 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:46:48.772846  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:48.796130  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	W1213 13:46:45.768717  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	W1213 13:46:48.269155  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	I1213 13:46:48.906093  734452 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:46:48.910767  734452 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 13:46:48.910823  734452 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 13:46:48.910839  734452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/addons for local assets ...
	I1213 13:46:48.910910  734452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/files for local assets ...
	I1213 13:46:48.911037  734452 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem -> 3941302.pem in /etc/ssl/certs
	I1213 13:46:48.911209  734452 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 13:46:48.921911  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:46:48.947921  734452 start.go:296] duration metric: took 175.219125ms for postStartSetup
	I1213 13:46:48.948314  734452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:46:48.972402  734452 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/config.json ...
	I1213 13:46:48.972688  734452 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:46:48.972732  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:48.995624  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:46:49.100377  734452 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 13:46:49.106414  734452 start.go:128] duration metric: took 10.061800408s to createHost
	I1213 13:46:49.106444  734452 start.go:83] releasing machines lock for "newest-cni-362964", held for 10.062163513s
	I1213 13:46:49.106521  734452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:46:49.131359  734452 ssh_runner.go:195] Run: cat /version.json
	I1213 13:46:49.131430  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:49.131434  734452 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:46:49.131534  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:49.155684  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:46:49.156118  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:46:49.345845  734452 ssh_runner.go:195] Run: systemctl --version
	I1213 13:46:49.354872  734452 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:46:49.402808  734452 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 13:46:49.408988  734452 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:46:49.409066  734452 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:46:49.440997  734452 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 13:46:49.441025  734452 start.go:496] detecting cgroup driver to use...
	I1213 13:46:49.441060  734452 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 13:46:49.441115  734452 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:46:49.462316  734452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:46:49.477713  734452 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:46:49.477795  734452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:46:49.501648  734452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:46:49.526524  734452 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:46:49.629504  734452 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:46:49.728940  734452 docker.go:234] disabling docker service ...
	I1213 13:46:49.729008  734452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:46:49.751594  734452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:46:49.766407  734452 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:46:49.855523  734452 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:46:49.940562  734452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:46:49.953965  734452 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:46:49.968209  734452 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 13:46:49.968288  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:49.979551  734452 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 13:46:49.979626  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:49.988154  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:49.997026  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:50.005337  734452 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:46:50.013019  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:50.021641  734452 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:50.035024  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:50.043264  734452 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:46:50.050409  734452 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:46:50.057213  734452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:46:50.144700  734452 ssh_runner.go:195] Run: sudo systemctl restart crio
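A minimal sketch of confirming that the drop-in edits applied above survived the restart (paths and keys are the ones edited in this log; container name from this run):

	docker exec newest-cni-362964 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	docker exec newest-cni-362964 cat /etc/crictl.yaml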
	I1213 13:46:51.023735  734452 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:46:51.023835  734452 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:46:51.028520  734452 start.go:564] Will wait 60s for crictl version
	I1213 13:46:51.028585  734452 ssh_runner.go:195] Run: which crictl
	I1213 13:46:51.032526  734452 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 13:46:51.058397  734452 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 13:46:51.058490  734452 ssh_runner.go:195] Run: crio --version
	I1213 13:46:51.086747  734452 ssh_runner.go:195] Run: crio --version
	I1213 13:46:51.117725  734452 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 13:46:51.118756  734452 cli_runner.go:164] Run: docker network inspect newest-cni-362964 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:46:51.138994  734452 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 13:46:51.143167  734452 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:46:51.155706  734452 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 13:46:51.156802  734452 kubeadm.go:884] updating cluster {Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:46:51.156953  734452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:46:51.157039  734452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:46:51.198200  734452 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:46:51.198221  734452 crio.go:433] Images already preloaded, skipping extraction
	I1213 13:46:51.198267  734452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:46:51.225683  734452 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:46:51.225709  734452 cache_images.go:86] Images are preloaded, skipping loading
	I1213 13:46:51.225719  734452 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 13:46:51.225843  734452 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-362964 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 13:46:51.225940  734452 ssh_runner.go:195] Run: crio config
	I1213 13:46:51.273702  734452 cni.go:84] Creating CNI manager for ""
	I1213 13:46:51.273722  734452 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:46:51.273741  734452 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 13:46:51.273768  734452 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-362964 NodeName:newest-cni-362964 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:46:51.273951  734452 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-362964"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
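	A generated config like the one above can be sanity-checked without touching the node, for example with kubeadm's own validator (a minimal sketch; the file path is the one this log writes a few lines below, and the flags are standard kubeadm):

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new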
	
	I1213 13:46:51.274024  734452 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 13:46:51.282302  734452 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:46:51.282376  734452 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:46:51.290422  734452 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 13:46:51.303253  734452 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 13:46:51.318075  734452 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1213 13:46:51.331214  734452 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 13:46:51.334976  734452 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:46:51.345829  734452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:46:51.437080  734452 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:46:51.461201  734452 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964 for IP: 192.168.76.2
	I1213 13:46:51.461228  734452 certs.go:195] generating shared ca certs ...
	I1213 13:46:51.461258  734452 certs.go:227] acquiring lock for ca certs: {Name:mkb6963f3134ffd486c672ddb3a967e56122d5d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.461456  734452 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key
	I1213 13:46:51.461517  734452 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key
	I1213 13:46:51.461535  734452 certs.go:257] generating profile certs ...
	I1213 13:46:51.461611  734452 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.key
	I1213 13:46:51.461644  734452 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.crt with IP's: []
	I1213 13:46:51.675129  734452 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.crt ...
	I1213 13:46:51.675163  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.crt: {Name:mkfc2919111fa26d81b7191d3873ecc598936940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.675356  734452 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.key ...
	I1213 13:46:51.675368  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.key: {Name:mkcca4e2f19072f042ecc8cce95f891ff7bba521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.675455  734452 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb
	I1213 13:46:51.675473  734452 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt.a735fadb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 13:46:51.732537  734452 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt.a735fadb ...
	I1213 13:46:51.732571  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt.a735fadb: {Name:mka68b1fc7336251712aa83c57233f6aaa26b56e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.732752  734452 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb ...
	I1213 13:46:51.732766  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb: {Name:mk7b2188d2ac3de30be4a0ecf05771755b89586c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.732898  734452 certs.go:382] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt.a735fadb -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt
	I1213 13:46:51.733002  734452 certs.go:386] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key
	I1213 13:46:51.733072  734452 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key
	I1213 13:46:51.733091  734452 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt with IP's: []
	I1213 13:46:51.768844  734452 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt ...
	I1213 13:46:51.768876  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt: {Name:mk54ca537df717e699f15967f0763bc1a365ba7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.769051  734452 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key ...
	I1213 13:46:51.769066  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key: {Name:mkc6731d5f061dd55c086b1529645fdd7e056a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.769254  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem (1338 bytes)
	W1213 13:46:51.769294  734452 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130_empty.pem, impossibly tiny 0 bytes
	I1213 13:46:51.769306  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:46:51.769336  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:46:51.769363  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:46:51.769392  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem (1679 bytes)
	I1213 13:46:51.769438  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:46:51.770096  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:46:51.789179  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 13:46:51.807957  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:46:51.829246  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 13:46:51.849816  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 13:46:51.867382  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 13:46:51.884431  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:46:51.901499  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 13:46:51.918590  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem --> /usr/share/ca-certificates/394130.pem (1338 bytes)
	I1213 13:46:51.938587  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /usr/share/ca-certificates/3941302.pem (1708 bytes)
	I1213 13:46:51.956885  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:46:51.976711  734452 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:46:51.990451  734452 ssh_runner.go:195] Run: openssl version
	I1213 13:46:51.996876  734452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/394130.pem
	I1213 13:46:52.004771  734452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/394130.pem /etc/ssl/certs/394130.pem
	I1213 13:46:52.013327  734452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/394130.pem
	I1213 13:46:52.017188  734452 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/394130.pem
	I1213 13:46:52.017246  734452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/394130.pem
	I1213 13:46:52.052182  734452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 13:46:52.060156  734452 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/394130.pem /etc/ssl/certs/51391683.0
	I1213 13:46:52.067555  734452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3941302.pem
	I1213 13:46:52.074980  734452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3941302.pem /etc/ssl/certs/3941302.pem
	I1213 13:46:52.083293  734452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3941302.pem
	I1213 13:46:52.087008  734452 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/3941302.pem
	I1213 13:46:52.087060  734452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3941302.pem
	I1213 13:46:52.121292  734452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 13:46:52.129202  734452 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3941302.pem /etc/ssl/certs/3ec20f2e.0
	I1213 13:46:52.136878  734452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:52.144894  734452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:46:52.152936  734452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:52.156906  734452 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:52.156974  734452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:52.192626  734452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 13:46:52.200484  734452 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 13:46:52.207749  734452 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:46:52.211283  734452 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 13:46:52.211338  734452 kubeadm.go:401] StartCluster: {Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:46:52.211418  734452 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:46:52.211486  734452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:46:52.238989  734452 cri.go:89] found id: ""
	I1213 13:46:52.239071  734452 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:46:52.248678  734452 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 13:46:52.257209  734452 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 13:46:52.257267  734452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 13:46:52.265205  734452 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 13:46:52.265226  734452 kubeadm.go:158] found existing configuration files:
	
	I1213 13:46:52.265280  734452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 13:46:52.273379  734452 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 13:46:52.273433  734452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 13:46:52.280768  734452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 13:46:52.288560  734452 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 13:46:52.288610  734452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 13:46:52.296093  734452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 13:46:52.303964  734452 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 13:46:52.304023  734452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 13:46:52.311559  734452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 13:46:52.320197  734452 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 13:46:52.320257  734452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 13:46:52.334065  734452 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 13:46:52.371455  734452 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 13:46:52.371571  734452 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 13:46:52.442098  734452 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 13:46:52.442200  734452 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1213 13:46:52.442255  734452 kubeadm.go:319] OS: Linux
	I1213 13:46:52.442323  734452 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 13:46:52.442390  734452 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 13:46:52.442455  734452 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 13:46:52.442512  734452 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 13:46:52.442578  734452 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 13:46:52.442697  734452 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 13:46:52.442826  734452 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 13:46:52.442969  734452 kubeadm.go:319] CGROUPS_IO: enabled
	I1213 13:46:52.508064  734452 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 13:46:52.508249  734452 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 13:46:52.508406  734452 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 13:46:52.516288  734452 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Dec 13 13:46:18 no-preload-992258 crio[570]: time="2025-12-13T13:46:18.181992531Z" level=info msg="Started container" PID=1764 containerID=f8965554043a6f77f855491f3838ee4fdfe0a5709067e4b790a74ea8832af5c9 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp/dashboard-metrics-scraper id=6acd9911-334d-441a-ac48-dd8fd737a26d name=/runtime.v1.RuntimeService/StartContainer sandboxID=85dc96530c6c5b2e7edd0695e2a10ebe79f1c429439d7a2607d250579491568d
	Dec 13 13:46:19 no-preload-992258 crio[570]: time="2025-12-13T13:46:19.215170726Z" level=info msg="Removing container: 01e33e6a8976b6c42ccb91fd81806b25ffa4f585d630656b0195a61725bbf821" id=a4fe5e0c-a5bf-4641-a48e-5f934e6d7117 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 13:46:19 no-preload-992258 crio[570]: time="2025-12-13T13:46:19.228855754Z" level=info msg="Removed container 01e33e6a8976b6c42ccb91fd81806b25ffa4f585d630656b0195a61725bbf821: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp/dashboard-metrics-scraper" id=a4fe5e0c-a5bf-4641-a48e-5f934e6d7117 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 13:46:28 no-preload-992258 crio[570]: time="2025-12-13T13:46:28.239041845Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d66d4761-f9f2-4db4-a817-78f7f6c14991 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:28 no-preload-992258 crio[570]: time="2025-12-13T13:46:28.240060035Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=741e86a0-c57c-40a4-b298-312a7bb67559 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:28 no-preload-992258 crio[570]: time="2025-12-13T13:46:28.241149172Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f5487051-0431-4c86-96dc-cecb8795180d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:28 no-preload-992258 crio[570]: time="2025-12-13T13:46:28.241284103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:28 no-preload-992258 crio[570]: time="2025-12-13T13:46:28.246265844Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:28 no-preload-992258 crio[570]: time="2025-12-13T13:46:28.246457393Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/18ef10547fb4ad8c95ff1ed2a4e31b156ad847e94010d9a1b6b90190825e25ba/merged/etc/passwd: no such file or directory"
	Dec 13 13:46:28 no-preload-992258 crio[570]: time="2025-12-13T13:46:28.246487135Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/18ef10547fb4ad8c95ff1ed2a4e31b156ad847e94010d9a1b6b90190825e25ba/merged/etc/group: no such file or directory"
	Dec 13 13:46:28 no-preload-992258 crio[570]: time="2025-12-13T13:46:28.247204886Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:28 no-preload-992258 crio[570]: time="2025-12-13T13:46:28.273484748Z" level=info msg="Created container 2d56b5a331f4ca246a909031b824e174119e2403583309ef04c380d1092001eb: kube-system/storage-provisioner/storage-provisioner" id=f5487051-0431-4c86-96dc-cecb8795180d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:28 no-preload-992258 crio[570]: time="2025-12-13T13:46:28.274112101Z" level=info msg="Starting container: 2d56b5a331f4ca246a909031b824e174119e2403583309ef04c380d1092001eb" id=c69ddfb9-4df8-42bb-aced-31e4a3695ec5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:46:28 no-preload-992258 crio[570]: time="2025-12-13T13:46:28.276096983Z" level=info msg="Started container" PID=1782 containerID=2d56b5a331f4ca246a909031b824e174119e2403583309ef04c380d1092001eb description=kube-system/storage-provisioner/storage-provisioner id=c69ddfb9-4df8-42bb-aced-31e4a3695ec5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=05567250c80e881eca8da917698f8331b9417dd48cc02db837117e029e118dfb
	Dec 13 13:46:42 no-preload-992258 crio[570]: time="2025-12-13T13:46:42.121865492Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3aa28506-d521-4478-bfac-ccb6571e23b9 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:42 no-preload-992258 crio[570]: time="2025-12-13T13:46:42.199305041Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=aba35828-20ef-4d1b-ac2d-af7a788de35c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:42 no-preload-992258 crio[570]: time="2025-12-13T13:46:42.200535265Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp/dashboard-metrics-scraper" id=590c31b4-0504-4667-9721-d2bd64444308 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:42 no-preload-992258 crio[570]: time="2025-12-13T13:46:42.20066377Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:42 no-preload-992258 crio[570]: time="2025-12-13T13:46:42.229465029Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:42 no-preload-992258 crio[570]: time="2025-12-13T13:46:42.230141613Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:42 no-preload-992258 crio[570]: time="2025-12-13T13:46:42.362321651Z" level=info msg="Created container adb9dcbdc1b93162ba6534511972d54a30ff5daeb5209b44d9548d7732ab6c8a: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp/dashboard-metrics-scraper" id=590c31b4-0504-4667-9721-d2bd64444308 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:42 no-preload-992258 crio[570]: time="2025-12-13T13:46:42.363017477Z" level=info msg="Starting container: adb9dcbdc1b93162ba6534511972d54a30ff5daeb5209b44d9548d7732ab6c8a" id=0cd70670-f1b0-4be8-89f9-6c4de260d414 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:46:42 no-preload-992258 crio[570]: time="2025-12-13T13:46:42.365507661Z" level=info msg="Started container" PID=1818 containerID=adb9dcbdc1b93162ba6534511972d54a30ff5daeb5209b44d9548d7732ab6c8a description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp/dashboard-metrics-scraper id=0cd70670-f1b0-4be8-89f9-6c4de260d414 name=/runtime.v1.RuntimeService/StartContainer sandboxID=85dc96530c6c5b2e7edd0695e2a10ebe79f1c429439d7a2607d250579491568d
	Dec 13 13:46:43 no-preload-992258 crio[570]: time="2025-12-13T13:46:43.283806114Z" level=info msg="Removing container: f8965554043a6f77f855491f3838ee4fdfe0a5709067e4b790a74ea8832af5c9" id=824b65d6-8ee2-4f6c-9724-e576033f3f16 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 13:46:43 no-preload-992258 crio[570]: time="2025-12-13T13:46:43.810124076Z" level=info msg="Removed container f8965554043a6f77f855491f3838ee4fdfe0a5709067e4b790a74ea8832af5c9: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp/dashboard-metrics-scraper" id=824b65d6-8ee2-4f6c-9724-e576033f3f16 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	adb9dcbdc1b93       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   3                   85dc96530c6c5       dashboard-metrics-scraper-867fb5f87b-sj6kp   kubernetes-dashboard
	2d56b5a331f4c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   05567250c80e8       storage-provisioner                          kube-system
	b5e5b43f17886       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   49 seconds ago      Running             kubernetes-dashboard        0                   b7674a0570adb       kubernetes-dashboard-b84665fb8-dkjpg         kubernetes-dashboard
	801de776f7692       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           56 seconds ago      Running             coredns                     0                   b90cf7b42ff16       coredns-7d764666f9-qfkgp                     kube-system
	d92802dbdc886       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   c36f5cf0a3dd5       busybox                                      default
	263fb19e23abb       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           56 seconds ago      Running             kube-proxy                  0                   d284825762c56       kube-proxy-sjrzk                             kube-system
	c671fd402d975       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   a1bca9a4dfbfc       kindnet-2n8ks                                kube-system
	a78a5599787f3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   05567250c80e8       storage-provisioner                          kube-system
	45bf7a76efd36       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           58 seconds ago      Running             kube-controller-manager     0                   23ab819fffeee       kube-controller-manager-no-preload-992258    kube-system
	9562ef2afadd5       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           58 seconds ago      Running             kube-apiserver              0                   8153a0e592018       kube-apiserver-no-preload-992258             kube-system
	8dcbdf570cbc8       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           58 seconds ago      Running             etcd                        0                   4fe9cf3b72557       etcd-no-preload-992258                       kube-system
	15112b75b1e5d       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           58 seconds ago      Running             kube-scheduler              0                   6d759f3cb1f2c       kube-scheduler-no-preload-992258             kube-system
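
The container listing above is CRI-O's view of the node. A minimal sketch of reproducing it by hand, assuming crictl on the node is pointed at the same CRI-O socket the kubelet uses (unix:///var/run/crio/crio.sock, per the KubeletConfiguration earlier in this log):

    # list all containers, including exited ones, through the CRI-O socket
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a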
	
	
	==> coredns [801de776f76929010a0f1c9e14f42cda1b053140754f6395d039186175e1ea80] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:45155 - 8378 "HINFO IN 5065514542116602580.4737308741301018469. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.105160759s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-992258
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-992258
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=no-preload-992258
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_44_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:44:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-992258
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:46:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:46:27 +0000   Sat, 13 Dec 2025 13:44:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:46:27 +0000   Sat, 13 Dec 2025 13:44:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:46:27 +0000   Sat, 13 Dec 2025 13:44:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 13:46:27 +0000   Sat, 13 Dec 2025 13:45:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-992258
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                a54834e7-7b06-490e-bc63-9fe908fc9136
	  Boot ID:                    3a031c38-2de5-4abf-9191-ca3cf8c37af1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-7d764666f9-qfkgp                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-no-preload-992258                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-2n8ks                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-no-preload-992258              250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-992258     200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-sjrzk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-no-preload-992258              100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-sj6kp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-dkjpg          0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  110s  node-controller  Node no-preload-992258 event: Registered Node no-preload-992258 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node no-preload-992258 event: Registered Node no-preload-992258 in Controller
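
The node summary and events above are the standard describe output for the control-plane node; the equivalent view can be fetched directly against this cluster (a sketch, assuming the profile's kubeconfig is active):

    kubectl describe node no-preload-992258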
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 d4 5a 35 c7 c3 08 06
	[  +0.021086] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +19.681588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 0c 97 18 9b e3 08 06
	[  +0.000314] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 04 61 d2 c8 ed 08 06
	[Dec13 13:44] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[  +7.252347] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 ce fd 58 59 0f 08 06
	[  +0.000117] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	[  +1.567410] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 59 b8 80 29 4a 08 06
	[  +0.000370] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +13.814205] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 cb 6b 87 5d af 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[Dec13 13:45] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8e 49 cc d7 b3 9c 08 06
	[  +0.000851] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	
	
	==> etcd [8dcbdf570cbc878b3202fdfd071d0477d8d282c28592111b59e9f42fd44842b9] <==
	{"level":"warn","ts":"2025-12-13T13:45:56.057975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.069082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.076464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.084599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.093091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.101837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.109931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.120280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.128896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.137920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.146042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.157290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.162134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.170706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.178860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.186959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.194832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.203345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.211194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.219802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.235935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.244247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.252705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.260435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.324876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53396","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:46:53 up  2:29,  0 user,  load average: 5.87, 4.34, 2.75
	Linux no-preload-992258 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c671fd402d975e8ae24c777b924655550f32179268205781b348f0a491b5526f] <==
	I1213 13:45:57.654553       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 13:45:57.654848       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 13:45:57.655053       1 main.go:148] setting mtu 1500 for CNI 
	I1213 13:45:57.655074       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 13:45:57.655105       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T13:45:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 13:45:57.951319       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 13:45:57.951370       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 13:45:57.951389       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 13:45:57.952888       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 13:45:58.351578       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 13:45:58.351605       1 metrics.go:72] Registering metrics
	I1213 13:45:58.351679       1 controller.go:711] "Syncing nftables rules"
	I1213 13:46:07.858184       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 13:46:07.858233       1 main.go:301] handling current node
	I1213 13:46:17.860963       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 13:46:17.861001       1 main.go:301] handling current node
	I1213 13:46:27.857359       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 13:46:27.857401       1 main.go:301] handling current node
	I1213 13:46:37.857933       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 13:46:37.857997       1 main.go:301] handling current node
	I1213 13:46:47.857962       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 13:46:47.858043       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9562ef2afadd58588eb9f2ee3f8f0cf7f987ad9ae64f202a3c2bc83ff04864c0] <==
	I1213 13:45:56.919385       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1213 13:45:56.919474       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 13:45:56.919484       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 13:45:56.919570       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1213 13:45:56.919619       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 13:45:56.921035       1 aggregator.go:187] initial CRD sync complete...
	I1213 13:45:56.921049       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 13:45:56.921056       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 13:45:56.921062       1 cache.go:39] Caches are synced for autoregister controller
	I1213 13:45:56.926822       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1213 13:45:56.927298       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1213 13:45:56.942663       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 13:45:56.944733       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1213 13:45:56.959383       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:45:57.153035       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 13:45:57.289920       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 13:45:57.319190       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 13:45:57.341576       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 13:45:57.348535       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 13:45:57.387942       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.217.154"}
	I1213 13:45:57.396545       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.182.84"}
	I1213 13:45:57.811636       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1213 13:46:00.480766       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 13:46:00.681546       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 13:46:00.776986       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [45bf7a76efd360f1d23c44bb11c5c8a0f673954074b69b3130fea721533cb52c] <==
	I1213 13:46:00.082301       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.082302       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.082509       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.082580       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.082658       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.082910       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.082957       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.083963       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.083976       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.084010       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.087628       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.087648       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 13:46:00.088072       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.089859       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.089878       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.089916       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.089938       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.089955       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.089989       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.091379       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.094938       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.188202       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.189297       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.189315       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 13:46:00.189320       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [263fb19e23abb4d9244914e355a9fe801ab84ad39c6d84f9d3d30afae3172ba2] <==
	I1213 13:45:57.538531       1 server_linux.go:53] "Using iptables proxy"
	I1213 13:45:57.604283       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 13:45:57.704565       1 shared_informer.go:377] "Caches are synced"
	I1213 13:45:57.704608       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1213 13:45:57.704678       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:45:57.722769       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:45:57.722851       1 server_linux.go:136] "Using iptables Proxier"
	I1213 13:45:57.727683       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:45:57.728068       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 13:45:57.728089       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:45:57.729680       1 config.go:309] "Starting node config controller"
	I1213 13:45:57.729702       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:45:57.729752       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:45:57.729759       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:45:57.729805       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:45:57.730365       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:45:57.730467       1 config.go:200] "Starting service config controller"
	I1213 13:45:57.730511       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:45:57.829903       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 13:45:57.829915       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:45:57.830818       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 13:45:57.830849       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [15112b75b1e5daf4777acbd4a1bc72aa48be95dbc7a9d989384f13be2d385572] <==
	I1213 13:45:54.841227       1 serving.go:386] Generated self-signed cert in-memory
	W1213 13:45:56.830485       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 13:45:56.830538       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 13:45:56.830550       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 13:45:56.830558       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 13:45:56.895992       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1213 13:45:56.896105       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:45:56.899346       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 13:45:56.899590       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 13:45:56.900826       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 13:45:56.900303       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 13:45:57.001186       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 13 13:46:18 no-preload-992258 kubelet[721]: I1213 13:46:18.120827     721 scope.go:122] "RemoveContainer" containerID="01e33e6a8976b6c42ccb91fd81806b25ffa4f585d630656b0195a61725bbf821"
	Dec 13 13:46:18 no-preload-992258 kubelet[721]: E1213 13:46:18.206902     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp" containerName="dashboard-metrics-scraper"
	Dec 13 13:46:18 no-preload-992258 kubelet[721]: I1213 13:46:18.218813     721 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp" podStartSLOduration=1.099450118 podStartE2EDuration="18.218794458s" podCreationTimestamp="2025-12-13 13:46:00 +0000 UTC" firstStartedPulling="2025-12-13 13:46:01.004922072 +0000 UTC m=+6.972561678" lastFinishedPulling="2025-12-13 13:46:18.124266398 +0000 UTC m=+24.091906018" observedRunningTime="2025-12-13 13:46:18.21871469 +0000 UTC m=+24.186354315" watchObservedRunningTime="2025-12-13 13:46:18.218794458 +0000 UTC m=+24.186434087"
	Dec 13 13:46:19 no-preload-992258 kubelet[721]: I1213 13:46:19.212928     721 scope.go:122] "RemoveContainer" containerID="01e33e6a8976b6c42ccb91fd81806b25ffa4f585d630656b0195a61725bbf821"
	Dec 13 13:46:19 no-preload-992258 kubelet[721]: E1213 13:46:19.213896     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp" containerName="dashboard-metrics-scraper"
	Dec 13 13:46:19 no-preload-992258 kubelet[721]: I1213 13:46:19.213940     721 scope.go:122] "RemoveContainer" containerID="f8965554043a6f77f855491f3838ee4fdfe0a5709067e4b790a74ea8832af5c9"
	Dec 13 13:46:19 no-preload-992258 kubelet[721]: E1213 13:46:19.214141     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-sj6kp_kubernetes-dashboard(7120637b-230f-468a-afeb-c8e414127e61)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp" podUID="7120637b-230f-468a-afeb-c8e414127e61"
	Dec 13 13:46:26 no-preload-992258 kubelet[721]: E1213 13:46:26.870467     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp" containerName="dashboard-metrics-scraper"
	Dec 13 13:46:26 no-preload-992258 kubelet[721]: I1213 13:46:26.870503     721 scope.go:122] "RemoveContainer" containerID="f8965554043a6f77f855491f3838ee4fdfe0a5709067e4b790a74ea8832af5c9"
	Dec 13 13:46:26 no-preload-992258 kubelet[721]: E1213 13:46:26.870718     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-sj6kp_kubernetes-dashboard(7120637b-230f-468a-afeb-c8e414127e61)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp" podUID="7120637b-230f-468a-afeb-c8e414127e61"
	Dec 13 13:46:28 no-preload-992258 kubelet[721]: I1213 13:46:28.238553     721 scope.go:122] "RemoveContainer" containerID="a78a5599787f3499391e0c432d3d1abd39385a3618ed54a5cba6601b8a71284b"
	Dec 13 13:46:37 no-preload-992258 kubelet[721]: E1213 13:46:37.323686     721 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-qfkgp" containerName="coredns"
	Dec 13 13:46:42 no-preload-992258 kubelet[721]: E1213 13:46:42.121337     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp" containerName="dashboard-metrics-scraper"
	Dec 13 13:46:42 no-preload-992258 kubelet[721]: I1213 13:46:42.121376     721 scope.go:122] "RemoveContainer" containerID="f8965554043a6f77f855491f3838ee4fdfe0a5709067e4b790a74ea8832af5c9"
	Dec 13 13:46:43 no-preload-992258 kubelet[721]: I1213 13:46:43.282505     721 scope.go:122] "RemoveContainer" containerID="f8965554043a6f77f855491f3838ee4fdfe0a5709067e4b790a74ea8832af5c9"
	Dec 13 13:46:43 no-preload-992258 kubelet[721]: E1213 13:46:43.282723     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp" containerName="dashboard-metrics-scraper"
	Dec 13 13:46:43 no-preload-992258 kubelet[721]: I1213 13:46:43.282754     721 scope.go:122] "RemoveContainer" containerID="adb9dcbdc1b93162ba6534511972d54a30ff5daeb5209b44d9548d7732ab6c8a"
	Dec 13 13:46:43 no-preload-992258 kubelet[721]: E1213 13:46:43.282995     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-sj6kp_kubernetes-dashboard(7120637b-230f-468a-afeb-c8e414127e61)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp" podUID="7120637b-230f-468a-afeb-c8e414127e61"
	Dec 13 13:46:46 no-preload-992258 kubelet[721]: E1213 13:46:46.871100     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp" containerName="dashboard-metrics-scraper"
	Dec 13 13:46:46 no-preload-992258 kubelet[721]: I1213 13:46:46.871145     721 scope.go:122] "RemoveContainer" containerID="adb9dcbdc1b93162ba6534511972d54a30ff5daeb5209b44d9548d7732ab6c8a"
	Dec 13 13:46:46 no-preload-992258 kubelet[721]: E1213 13:46:46.871323     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-sj6kp_kubernetes-dashboard(7120637b-230f-468a-afeb-c8e414127e61)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp" podUID="7120637b-230f-468a-afeb-c8e414127e61"
	Dec 13 13:46:51 no-preload-992258 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 13:46:51 no-preload-992258 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 13:46:51 no-preload-992258 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 13:46:51 no-preload-992258 systemd[1]: kubelet.service: Consumed 1.797s CPU time.
	
	
	==> kubernetes-dashboard [b5e5b43f17886e53be725c2b848298a4b1825dc9c18fa4ea1aec41a64b43407d] <==
	2025/12/13 13:46:04 Using namespace: kubernetes-dashboard
	2025/12/13 13:46:04 Using in-cluster config to connect to apiserver
	2025/12/13 13:46:04 Using secret token for csrf signing
	2025/12/13 13:46:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 13:46:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 13:46:04 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/13 13:46:04 Generating JWE encryption key
	2025/12/13 13:46:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 13:46:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 13:46:04 Initializing JWE encryption key from synchronized object
	2025/12/13 13:46:04 Creating in-cluster Sidecar client
	2025/12/13 13:46:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 13:46:04 Serving insecurely on HTTP port: 9090
	2025/12/13 13:46:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 13:46:04 Starting overwatch
	
	
	==> storage-provisioner [2d56b5a331f4ca246a909031b824e174119e2403583309ef04c380d1092001eb] <==
	I1213 13:46:28.288520       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 13:46:28.296367       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 13:46:28.296412       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 13:46:28.298261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:31.753131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:36.013721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:39.614917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:42.674385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:45.697342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:45.702471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 13:46:45.702637       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 13:46:45.702864       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-992258_bb3e5d11-8254-4306-b990-23cb0fb499e0!
	I1213 13:46:45.702822       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8639f770-ba62-4b85-93df-6f4c8eca72ae", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-992258_bb3e5d11-8254-4306-b990-23cb0fb499e0 became leader
	W1213 13:46:45.705577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:45.709669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 13:46:45.803388       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-992258_bb3e5d11-8254-4306-b990-23cb0fb499e0!
	W1213 13:46:47.713204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:47.717885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:49.721328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:49.725811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:51.728426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:51.732292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:53.735071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:53.740470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a78a5599787f3499391e0c432d3d1abd39385a3618ed54a5cba6601b8a71284b] <==
	I1213 13:45:57.494896       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 13:46:27.497259       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-992258 -n no-preload-992258
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-992258 -n no-preload-992258: exit status 2 (330.439391ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-992258 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-992258
helpers_test.go:244: (dbg) docker inspect no-preload-992258:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1ee238da5195f26130843a1fef5cc5d89d2b40177ad305da75ce0a8298d9c5a7",
	        "Created": "2025-12-13T13:44:34.580077423Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 723481,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:45:47.196858076Z",
	            "FinishedAt": "2025-12-13T13:45:46.316395573Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/1ee238da5195f26130843a1fef5cc5d89d2b40177ad305da75ce0a8298d9c5a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1ee238da5195f26130843a1fef5cc5d89d2b40177ad305da75ce0a8298d9c5a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/1ee238da5195f26130843a1fef5cc5d89d2b40177ad305da75ce0a8298d9c5a7/hosts",
	        "LogPath": "/var/lib/docker/containers/1ee238da5195f26130843a1fef5cc5d89d2b40177ad305da75ce0a8298d9c5a7/1ee238da5195f26130843a1fef5cc5d89d2b40177ad305da75ce0a8298d9c5a7-json.log",
	        "Name": "/no-preload-992258",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-992258:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-992258",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1ee238da5195f26130843a1fef5cc5d89d2b40177ad305da75ce0a8298d9c5a7",
	                "LowerDir": "/var/lib/docker/overlay2/e62da2e21090d931262b0bfdee947efa3f7e7addf083b74e9377f9573a972c68-init/diff:/var/lib/docker/overlay2/2ab30f867418f233812f5ff754587aaeab7569a5579dc6a5c99873a35cf81eb6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e62da2e21090d931262b0bfdee947efa3f7e7addf083b74e9377f9573a972c68/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e62da2e21090d931262b0bfdee947efa3f7e7addf083b74e9377f9573a972c68/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e62da2e21090d931262b0bfdee947efa3f7e7addf083b74e9377f9573a972c68/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-992258",
	                "Source": "/var/lib/docker/volumes/no-preload-992258/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-992258",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-992258",
	                "name.minikube.sigs.k8s.io": "no-preload-992258",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "df263f38da077406d26ae3e17b9b3ecc49db5a00c55e08d3c705d0aa51aff415",
	            "SandboxKey": "/var/run/docker/netns/df263f38da07",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33498"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33499"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33503"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33500"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33501"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-992258": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6b03146af25791542829a33be34e6cdd463680d204ddd7fe7766c21dca4ab829",
	                    "EndpointID": "b1524a40f3af1ec44f444eb434dd147646c32235a66c3e135a154e0ce7cba698",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "ae:6e:83:67:f6:45",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-992258",
	                        "1ee238da5195"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-992258 -n no-preload-992258
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-992258 -n no-preload-992258: exit status 2 (364.403224ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-992258 logs -n 25
E1213 13:46:55.636627  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/auto-884214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:46:55.643446  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/auto-884214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:46:55.654886  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/auto-884214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:46:55.676393  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/auto-884214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:46:55.718244  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/auto-884214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:46:55.799664  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/auto-884214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:46:55.961032  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/auto-884214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-992258 logs -n 25: (1.194992522s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ ssh     │ -p bridge-884214 sudo crio config                                                                                                                                                                                                                    │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ delete  │ -p bridge-884214                                                                                                                                                                                                                                     │ bridge-884214                │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ delete  │ -p disable-driver-mounts-031848                                                                                                                                                                                                                      │ disable-driver-mounts-031848 │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p default-k8s-diff-port-038239 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ addons  │ enable metrics-server -p no-preload-992258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-417583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p old-k8s-version-417583 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ stop    │ -p no-preload-992258 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ addons  │ enable metrics-server -p embed-certs-973953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ stop    │ -p embed-certs-973953 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ addons  │ enable dashboard -p no-preload-992258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p no-preload-992258 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ addons  │ enable dashboard -p embed-certs-973953 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p embed-certs-973953 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-038239 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-038239 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ image   │ old-k8s-version-417583 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p old-k8s-version-417583 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-038239 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p default-k8s-diff-port-038239 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ delete  │ -p old-k8s-version-417583                                                                                                                                                                                                                            │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ delete  │ -p old-k8s-version-417583                                                                                                                                                                                                                            │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p newest-cni-362964 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ image   │ no-preload-992258 image list --format=json                                                                                                                                                                                                           │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p no-preload-992258 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:46:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:46:38.807259  734452 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:46:38.807356  734452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:46:38.807364  734452 out.go:374] Setting ErrFile to fd 2...
	I1213 13:46:38.807368  734452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:46:38.807581  734452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:46:38.808124  734452 out.go:368] Setting JSON to false
	I1213 13:46:38.809505  734452 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8947,"bootTime":1765624652,"procs":408,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:46:38.809572  734452 start.go:143] virtualization: kvm guest
	I1213 13:46:38.811798  734452 out.go:179] * [newest-cni-362964] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:46:38.813823  734452 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:46:38.813876  734452 notify.go:221] Checking for updates...
	I1213 13:46:38.816262  734452 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:46:38.817585  734452 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:46:38.818693  734452 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:46:38.820057  734452 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:46:38.821335  734452 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:46:38.823198  734452 config.go:182] Loaded profile config "default-k8s-diff-port-038239": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:46:38.823338  734452 config.go:182] Loaded profile config "embed-certs-973953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:46:38.823469  734452 config.go:182] Loaded profile config "no-preload-992258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:46:38.823581  734452 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:46:38.861614  734452 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:46:38.861761  734452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:46:38.931148  734452 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-13 13:46:38.919230241 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:46:38.931318  734452 docker.go:319] overlay module found
	I1213 13:46:38.933289  734452 out.go:179] * Using the docker driver based on user configuration
	I1213 13:46:38.934577  734452 start.go:309] selected driver: docker
	I1213 13:46:38.934599  734452 start.go:927] validating driver "docker" against <nil>
	I1213 13:46:38.934616  734452 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:46:38.935491  734452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:46:39.004706  734452 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-13 13:46:38.992987781 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:46:39.004928  734452 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 13:46:39.004966  734452 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 13:46:39.005271  734452 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 13:46:39.007551  734452 out.go:179] * Using Docker driver with root privileges
	I1213 13:46:39.008611  734452 cni.go:84] Creating CNI manager for ""
	I1213 13:46:39.008719  734452 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:46:39.008737  734452 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 13:46:39.008854  734452 start.go:353] cluster config:
	{Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:46:39.010974  734452 out.go:179] * Starting "newest-cni-362964" primary control-plane node in "newest-cni-362964" cluster
	I1213 13:46:39.012247  734452 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 13:46:39.013645  734452 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 13:46:39.016856  734452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:46:39.016895  734452 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1213 13:46:39.016914  734452 cache.go:65] Caching tarball of preloaded images
	I1213 13:46:39.016962  734452 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 13:46:39.017009  734452 preload.go:238] Found /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 13:46:39.017022  734452 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 13:46:39.017144  734452 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/config.json ...
	I1213 13:46:39.017168  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/config.json: {Name:mk03f8124fe1745099f3d3cb3fe7fe5ae5e6b929 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:39.044079  734452 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 13:46:39.044103  734452 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 13:46:39.044123  734452 cache.go:243] Successfully downloaded all kic artifacts
	I1213 13:46:39.044162  734452 start.go:360] acquireMachinesLock for newest-cni-362964: {Name:mk61572d281c54a6e0670409b0733cc12a8d00e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:46:39.044269  734452 start.go:364] duration metric: took 87.606µs to acquireMachinesLock for "newest-cni-362964"
	I1213 13:46:39.044501  734452 start.go:93] Provisioning new machine with config: &{Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:46:39.044595  734452 start.go:125] createHost starting for "" (driver="docker")
	I1213 13:46:37.593032  723278 pod_ready.go:94] pod "coredns-7d764666f9-qfkgp" is "Ready"
	I1213 13:46:37.593060  723278 pod_ready.go:86] duration metric: took 39.506081408s for pod "coredns-7d764666f9-qfkgp" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.595721  723278 pod_ready.go:83] waiting for pod "etcd-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.600005  723278 pod_ready.go:94] pod "etcd-no-preload-992258" is "Ready"
	I1213 13:46:37.600027  723278 pod_ready.go:86] duration metric: took 4.283645ms for pod "etcd-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.602349  723278 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.606335  723278 pod_ready.go:94] pod "kube-apiserver-no-preload-992258" is "Ready"
	I1213 13:46:37.606353  723278 pod_ready.go:86] duration metric: took 3.985408ms for pod "kube-apiserver-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.608278  723278 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.793439  723278 pod_ready.go:94] pod "kube-controller-manager-no-preload-992258" is "Ready"
	I1213 13:46:37.793538  723278 pod_ready.go:86] duration metric: took 185.240657ms for pod "kube-controller-manager-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.993814  723278 pod_ready.go:83] waiting for pod "kube-proxy-sjrzk" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:38.391287  723278 pod_ready.go:94] pod "kube-proxy-sjrzk" is "Ready"
	I1213 13:46:38.391316  723278 pod_ready.go:86] duration metric: took 397.467202ms for pod "kube-proxy-sjrzk" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:38.592664  723278 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:38.991819  723278 pod_ready.go:94] pod "kube-scheduler-no-preload-992258" is "Ready"
	I1213 13:46:38.991855  723278 pod_ready.go:86] duration metric: took 399.165979ms for pod "kube-scheduler-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:38.991870  723278 pod_ready.go:40] duration metric: took 40.907684385s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:46:39.055074  723278 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1213 13:46:39.056693  723278 out.go:179] * Done! kubectl is now configured to use "no-preload-992258" cluster and "default" namespace by default
	I1213 13:46:37.744577  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 13:46:37.744596  730912 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 13:46:37.744659  730912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-038239
	I1213 13:46:37.769735  730912 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 13:46:37.769842  730912 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 13:46:37.769924  730912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa Username:docker}
	I1213 13:46:37.769942  730912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-038239
	I1213 13:46:37.773997  730912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa Username:docker}
	I1213 13:46:37.806607  730912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa Username:docker}
	I1213 13:46:37.885020  730912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:46:37.892323  730912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:46:37.901908  730912 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-038239" to be "Ready" ...
	I1213 13:46:37.908074  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 13:46:37.908095  730912 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 13:46:37.924625  730912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 13:46:37.926038  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 13:46:37.926060  730912 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 13:46:37.942015  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 13:46:37.942038  730912 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 13:46:37.961315  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 13:46:37.961339  730912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 13:46:37.979600  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 13:46:37.979629  730912 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 13:46:38.003635  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 13:46:38.003660  730912 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 13:46:38.019334  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 13:46:38.019359  730912 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 13:46:38.036465  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 13:46:38.036507  730912 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 13:46:38.053804  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 13:46:38.053835  730912 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 13:46:38.071650  730912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 13:46:39.597072  730912 node_ready.go:49] node "default-k8s-diff-port-038239" is "Ready"
	I1213 13:46:39.597127  730912 node_ready.go:38] duration metric: took 1.695171527s for node "default-k8s-diff-port-038239" to be "Ready" ...
	I1213 13:46:39.597146  730912 api_server.go:52] waiting for apiserver process to appear ...
	I1213 13:46:39.597331  730912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:46:40.220696  730912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.328338683s)
	I1213 13:46:40.220801  730912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.296116857s)
	I1213 13:46:40.220919  730912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.149240842s)
	I1213 13:46:40.221000  730912 api_server.go:72] duration metric: took 2.51244991s to wait for apiserver process to appear ...
	I1213 13:46:40.221052  730912 api_server.go:88] waiting for apiserver healthz status ...
	I1213 13:46:40.221075  730912 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1213 13:46:40.223057  730912 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-038239 addons enable metrics-server
	
	I1213 13:46:40.226524  730912 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:46:40.226548  730912 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:46:40.228246  730912 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1213 13:46:40.229402  730912 addons.go:530] duration metric: took 2.520798966s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	W1213 13:46:37.552331  726383 pod_ready.go:104] pod "coredns-66bc5c9577-bl59n" is not "Ready", error: <nil>
	W1213 13:46:39.558845  726383 pod_ready.go:104] pod "coredns-66bc5c9577-bl59n" is not "Ready", error: <nil>
	I1213 13:46:39.050825  734452 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 13:46:39.051127  734452 start.go:159] libmachine.API.Create for "newest-cni-362964" (driver="docker")
	I1213 13:46:39.051170  734452 client.go:173] LocalClient.Create starting
	I1213 13:46:39.051291  734452 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem
	I1213 13:46:39.051338  734452 main.go:143] libmachine: Decoding PEM data...
	I1213 13:46:39.051367  734452 main.go:143] libmachine: Parsing certificate...
	I1213 13:46:39.051431  734452 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem
	I1213 13:46:39.051459  734452 main.go:143] libmachine: Decoding PEM data...
	I1213 13:46:39.051478  734452 main.go:143] libmachine: Parsing certificate...
	I1213 13:46:39.051941  734452 cli_runner.go:164] Run: docker network inspect newest-cni-362964 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 13:46:39.074137  734452 cli_runner.go:211] docker network inspect newest-cni-362964 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 13:46:39.074224  734452 network_create.go:284] running [docker network inspect newest-cni-362964] to gather additional debugging logs...
	I1213 13:46:39.074248  734452 cli_runner.go:164] Run: docker network inspect newest-cni-362964
	W1213 13:46:39.102273  734452 cli_runner.go:211] docker network inspect newest-cni-362964 returned with exit code 1
	I1213 13:46:39.102343  734452 network_create.go:287] error running [docker network inspect newest-cni-362964]: docker network inspect newest-cni-362964: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-362964 not found
	I1213 13:46:39.102377  734452 network_create.go:289] output of [docker network inspect newest-cni-362964]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-362964 not found
	
	** /stderr **
	I1213 13:46:39.102549  734452 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:46:39.122483  734452 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-90c6185d3a1c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:d7:d8:45:ed:62} reservation:<nil>}
	I1213 13:46:39.123444  734452 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b99c511b2851 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:f5:60:cf:cf:e0} reservation:<nil>}
	I1213 13:46:39.124137  734452 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8173e81c4a82 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:76:c5:9d:b0:f9} reservation:<nil>}
	I1213 13:46:39.125173  734452 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed8a30}
	I1213 13:46:39.125201  734452 network_create.go:124] attempt to create docker network newest-cni-362964 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 13:46:39.125260  734452 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-362964 newest-cni-362964
	I1213 13:46:39.179901  734452 network_create.go:108] docker network newest-cni-362964 192.168.76.0/24 created
	I1213 13:46:39.179928  734452 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-362964" container
	I1213 13:46:39.179979  734452 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 13:46:39.213973  734452 cli_runner.go:164] Run: docker volume create newest-cni-362964 --label name.minikube.sigs.k8s.io=newest-cni-362964 --label created_by.minikube.sigs.k8s.io=true
	I1213 13:46:39.235544  734452 oci.go:103] Successfully created a docker volume newest-cni-362964
	I1213 13:46:39.235642  734452 cli_runner.go:164] Run: docker run --rm --name newest-cni-362964-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-362964 --entrypoint /usr/bin/test -v newest-cni-362964:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 13:46:39.751588  734452 oci.go:107] Successfully prepared a docker volume newest-cni-362964
	I1213 13:46:39.751676  734452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:46:39.751688  734452 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 13:46:39.751766  734452 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-362964:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 13:46:40.721469  730912 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1213 13:46:40.727005  730912 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:46:40.727036  730912 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:46:41.221758  730912 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1213 13:46:41.227300  730912 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1213 13:46:41.228302  730912 api_server.go:141] control plane version: v1.34.2
	I1213 13:46:41.228325  730912 api_server.go:131] duration metric: took 1.007264269s to wait for apiserver health ...
	I1213 13:46:41.228334  730912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 13:46:41.231822  730912 system_pods.go:59] 8 kube-system pods found
	I1213 13:46:41.231857  730912 system_pods.go:61] "coredns-66bc5c9577-tzzmx" [980da903-c99d-4518-9ee3-7e5a96adec7e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:46:41.231869  730912 system_pods.go:61] "etcd-default-k8s-diff-port-038239" [4281e3fe-09b2-4f4b-b735-e81d8f92611d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 13:46:41.231876  730912 system_pods.go:61] "kindnet-c65rs" [70da74c6-b3f7-4c93-830f-cd2e08c1a82b] Running
	I1213 13:46:41.231882  730912 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038239" [61e90c83-4a74-41da-af00-64ad96e831b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 13:46:41.231891  730912 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038239" [327b2203-201b-4496-b88d-085894210077] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 13:46:41.231897  730912 system_pods.go:61] "kube-proxy-lzwfg" [706752fb-a589-4e6f-b710-228e3650dacd] Running
	I1213 13:46:41.231905  730912 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038239" [ae96dbde-d4ad-4db9-a9d4-dd56f9954d93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 13:46:41.231912  730912 system_pods.go:61] "storage-provisioner" [ee84dbb0-2764-427e-aa74-2827e9ce9620] Running
	I1213 13:46:41.231923  730912 system_pods.go:74] duration metric: took 3.580887ms to wait for pod list to return data ...
	I1213 13:46:41.231936  730912 default_sa.go:34] waiting for default service account to be created ...
	I1213 13:46:41.234505  730912 default_sa.go:45] found service account: "default"
	I1213 13:46:41.234528  730912 default_sa.go:55] duration metric: took 2.585513ms for default service account to be created ...
	I1213 13:46:41.234537  730912 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 13:46:41.237182  730912 system_pods.go:86] 8 kube-system pods found
	I1213 13:46:41.237209  730912 system_pods.go:89] "coredns-66bc5c9577-tzzmx" [980da903-c99d-4518-9ee3-7e5a96adec7e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:46:41.237220  730912 system_pods.go:89] "etcd-default-k8s-diff-port-038239" [4281e3fe-09b2-4f4b-b735-e81d8f92611d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 13:46:41.237227  730912 system_pods.go:89] "kindnet-c65rs" [70da74c6-b3f7-4c93-830f-cd2e08c1a82b] Running
	I1213 13:46:41.237236  730912 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-038239" [61e90c83-4a74-41da-af00-64ad96e831b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 13:46:41.237245  730912 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-038239" [327b2203-201b-4496-b88d-085894210077] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 13:46:41.237253  730912 system_pods.go:89] "kube-proxy-lzwfg" [706752fb-a589-4e6f-b710-228e3650dacd] Running
	I1213 13:46:41.237261  730912 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-038239" [ae96dbde-d4ad-4db9-a9d4-dd56f9954d93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 13:46:41.237271  730912 system_pods.go:89] "storage-provisioner" [ee84dbb0-2764-427e-aa74-2827e9ce9620] Running
	I1213 13:46:41.237279  730912 system_pods.go:126] duration metric: took 2.735704ms to wait for k8s-apps to be running ...
	I1213 13:46:41.237288  730912 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 13:46:41.237331  730912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:46:41.250597  730912 system_svc.go:56] duration metric: took 13.296933ms WaitForService to wait for kubelet
	I1213 13:46:41.250630  730912 kubeadm.go:587] duration metric: took 3.542081461s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:46:41.250655  730912 node_conditions.go:102] verifying NodePressure condition ...
	I1213 13:46:41.254078  730912 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 13:46:41.254103  730912 node_conditions.go:123] node cpu capacity is 8
	I1213 13:46:41.254126  730912 node_conditions.go:105] duration metric: took 3.462529ms to run NodePressure ...
	I1213 13:46:41.254141  730912 start.go:242] waiting for startup goroutines ...
	I1213 13:46:41.254155  730912 start.go:247] waiting for cluster config update ...
	I1213 13:46:41.254174  730912 start.go:256] writing updated cluster config ...
	I1213 13:46:41.254482  730912 ssh_runner.go:195] Run: rm -f paused
	I1213 13:46:41.258509  730912 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:46:41.262286  730912 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tzzmx" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 13:46:43.315769  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	W1213 13:46:42.051398  726383 pod_ready.go:104] pod "coredns-66bc5c9577-bl59n" is not "Ready", error: <nil>
	I1213 13:46:44.558674  726383 pod_ready.go:94] pod "coredns-66bc5c9577-bl59n" is "Ready"
	I1213 13:46:44.558713  726383 pod_ready.go:86] duration metric: took 32.012951382s for pod "coredns-66bc5c9577-bl59n" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.561144  726383 pod_ready.go:83] waiting for pod "etcd-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.565899  726383 pod_ready.go:94] pod "etcd-embed-certs-973953" is "Ready"
	I1213 13:46:44.565923  726383 pod_ready.go:86] duration metric: took 4.7423ms for pod "etcd-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.568261  726383 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.572565  726383 pod_ready.go:94] pod "kube-apiserver-embed-certs-973953" is "Ready"
	I1213 13:46:44.572592  726383 pod_ready.go:86] duration metric: took 4.304087ms for pod "kube-apiserver-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.575031  726383 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.750453  726383 pod_ready.go:94] pod "kube-controller-manager-embed-certs-973953" is "Ready"
	I1213 13:46:44.750489  726383 pod_ready.go:86] duration metric: took 175.430643ms for pod "kube-controller-manager-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.951317  726383 pod_ready.go:83] waiting for pod "kube-proxy-jqcpv" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:45.350477  726383 pod_ready.go:94] pod "kube-proxy-jqcpv" is "Ready"
	I1213 13:46:45.350507  726383 pod_ready.go:86] duration metric: took 399.159038ms for pod "kube-proxy-jqcpv" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:45.550818  726383 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:45.950357  726383 pod_ready.go:94] pod "kube-scheduler-embed-certs-973953" is "Ready"
	I1213 13:46:45.950385  726383 pod_ready.go:86] duration metric: took 399.541821ms for pod "kube-scheduler-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:45.950396  726383 pod_ready.go:40] duration metric: took 33.408030209s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:46:46.003877  726383 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 13:46:46.006266  726383 out.go:179] * Done! kubectl is now configured to use "embed-certs-973953" cluster and "default" namespace by default
	I1213 13:46:43.827925  734452 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-362964:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.0760512s)
	I1213 13:46:43.827966  734452 kic.go:203] duration metric: took 4.076273522s to extract preloaded images to volume ...
	W1213 13:46:43.828063  734452 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1213 13:46:43.828111  734452 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1213 13:46:43.828160  734452 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 13:46:43.885693  734452 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-362964 --name newest-cni-362964 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-362964 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-362964 --network newest-cni-362964 --ip 192.168.76.2 --volume newest-cni-362964:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 13:46:44.183753  734452 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Running}}
	I1213 13:46:44.203369  734452 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:46:44.223422  734452 cli_runner.go:164] Run: docker exec newest-cni-362964 stat /var/lib/dpkg/alternatives/iptables
	I1213 13:46:44.277034  734452 oci.go:144] the created container "newest-cni-362964" has a running status.
	I1213 13:46:44.277064  734452 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa...
	I1213 13:46:44.344914  734452 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 13:46:44.377198  734452 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:46:44.402053  734452 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 13:46:44.402083  734452 kic_runner.go:114] Args: [docker exec --privileged newest-cni-362964 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 13:46:44.478040  734452 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:46:44.506931  734452 machine.go:94] provisionDockerMachine start ...
	I1213 13:46:44.507418  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:44.537001  734452 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:44.537395  734452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1213 13:46:44.537427  734452 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:46:44.538118  734452 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48464->127.0.0.1:33515: read: connection reset by peer
	I1213 13:46:47.689037  734452 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-362964
	
	I1213 13:46:47.689072  734452 ubuntu.go:182] provisioning hostname "newest-cni-362964"
	I1213 13:46:47.689140  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:47.712543  734452 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:47.713000  734452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1213 13:46:47.713025  734452 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-362964 && echo "newest-cni-362964" | sudo tee /etc/hostname
	I1213 13:46:47.873217  734452 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-362964
	
	I1213 13:46:47.873318  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:47.896725  734452 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:47.897081  734452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1213 13:46:47.897130  734452 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-362964' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-362964/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-362964' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:46:48.044203  734452 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 13:46:48.044232  734452 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-390571/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-390571/.minikube}
	I1213 13:46:48.044289  734452 ubuntu.go:190] setting up certificates
	I1213 13:46:48.044304  734452 provision.go:84] configureAuth start
	I1213 13:46:48.044368  734452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:46:48.068662  734452 provision.go:143] copyHostCerts
	I1213 13:46:48.068728  734452 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem, removing ...
	I1213 13:46:48.068739  734452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem
	I1213 13:46:48.068879  734452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem (1123 bytes)
	I1213 13:46:48.069004  734452 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem, removing ...
	I1213 13:46:48.069048  734452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem
	I1213 13:46:48.069113  734452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem (1679 bytes)
	I1213 13:46:48.069294  734452 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem, removing ...
	I1213 13:46:48.069312  734452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem
	I1213 13:46:48.069355  734452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem (1078 bytes)
	I1213 13:46:48.069462  734452 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem org=jenkins.newest-cni-362964 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-362964]
	I1213 13:46:48.220174  734452 provision.go:177] copyRemoteCerts
	I1213 13:46:48.220240  734452 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:46:48.220284  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:48.242055  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:46:48.348835  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 13:46:48.372845  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:46:48.394838  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 13:46:48.416450  734452 provision.go:87] duration metric: took 372.119155ms to configureAuth
	I1213 13:46:48.416488  734452 ubuntu.go:206] setting minikube options for container-runtime
	I1213 13:46:48.416718  734452 config.go:182] Loaded profile config "newest-cni-362964": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:46:48.416935  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:48.438340  734452 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:48.438572  734452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1213 13:46:48.438593  734452 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:46:48.772615  734452 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:46:48.772642  734452 machine.go:97] duration metric: took 4.265315999s to provisionDockerMachine
	I1213 13:46:48.772654  734452 client.go:176] duration metric: took 9.721476668s to LocalClient.Create
	I1213 13:46:48.772675  734452 start.go:167] duration metric: took 9.721549598s to libmachine.API.Create "newest-cni-362964"
	I1213 13:46:48.772685  734452 start.go:293] postStartSetup for "newest-cni-362964" (driver="docker")
	I1213 13:46:48.772700  734452 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:46:48.772766  734452 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:46:48.772846  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:48.796130  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	W1213 13:46:45.768717  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	W1213 13:46:48.269155  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	I1213 13:46:48.906093  734452 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:46:48.910767  734452 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 13:46:48.910823  734452 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 13:46:48.910839  734452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/addons for local assets ...
	I1213 13:46:48.910910  734452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/files for local assets ...
	I1213 13:46:48.911037  734452 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem -> 3941302.pem in /etc/ssl/certs
	I1213 13:46:48.911209  734452 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 13:46:48.921911  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:46:48.947921  734452 start.go:296] duration metric: took 175.219125ms for postStartSetup
	I1213 13:46:48.948314  734452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:46:48.972402  734452 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/config.json ...
	I1213 13:46:48.972688  734452 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:46:48.972732  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:48.995624  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:46:49.100377  734452 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 13:46:49.106414  734452 start.go:128] duration metric: took 10.061800408s to createHost
	I1213 13:46:49.106444  734452 start.go:83] releasing machines lock for "newest-cni-362964", held for 10.062163513s
	I1213 13:46:49.106521  734452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:46:49.131359  734452 ssh_runner.go:195] Run: cat /version.json
	I1213 13:46:49.131430  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:49.131434  734452 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:46:49.131534  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:49.155684  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:46:49.156118  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:46:49.345845  734452 ssh_runner.go:195] Run: systemctl --version
	I1213 13:46:49.354872  734452 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:46:49.402808  734452 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 13:46:49.408988  734452 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:46:49.409066  734452 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:46:49.440997  734452 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 13:46:49.441025  734452 start.go:496] detecting cgroup driver to use...
	I1213 13:46:49.441060  734452 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 13:46:49.441115  734452 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:46:49.462316  734452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:46:49.477713  734452 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:46:49.477795  734452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:46:49.501648  734452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:46:49.526524  734452 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:46:49.629504  734452 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:46:49.728940  734452 docker.go:234] disabling docker service ...
	I1213 13:46:49.729008  734452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:46:49.751594  734452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:46:49.766407  734452 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:46:49.855523  734452 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:46:49.940562  734452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:46:49.953965  734452 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:46:49.968209  734452 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 13:46:49.968288  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:49.979551  734452 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 13:46:49.979626  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:49.988154  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:49.997026  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:50.005337  734452 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:46:50.013019  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:50.021641  734452 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:50.035024  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:50.043264  734452 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:46:50.050409  734452 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:46:50.057213  734452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:46:50.144700  734452 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 13:46:51.023735  734452 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:46:51.023835  734452 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:46:51.028520  734452 start.go:564] Will wait 60s for crictl version
	I1213 13:46:51.028585  734452 ssh_runner.go:195] Run: which crictl
	I1213 13:46:51.032526  734452 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 13:46:51.058397  734452 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 13:46:51.058490  734452 ssh_runner.go:195] Run: crio --version
	I1213 13:46:51.086747  734452 ssh_runner.go:195] Run: crio --version
	I1213 13:46:51.117725  734452 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 13:46:51.118756  734452 cli_runner.go:164] Run: docker network inspect newest-cni-362964 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:46:51.138994  734452 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 13:46:51.143167  734452 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:46:51.155706  734452 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 13:46:51.156802  734452 kubeadm.go:884] updating cluster {Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:46:51.156953  734452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:46:51.157039  734452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:46:51.198200  734452 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:46:51.198221  734452 crio.go:433] Images already preloaded, skipping extraction
	I1213 13:46:51.198267  734452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:46:51.225683  734452 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:46:51.225709  734452 cache_images.go:86] Images are preloaded, skipping loading
	I1213 13:46:51.225719  734452 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 13:46:51.225843  734452 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-362964 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 13:46:51.225940  734452 ssh_runner.go:195] Run: crio config
	I1213 13:46:51.273702  734452 cni.go:84] Creating CNI manager for ""
	I1213 13:46:51.273722  734452 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:46:51.273741  734452 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 13:46:51.273768  734452 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-362964 NodeName:newest-cni-362964 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:46:51.273951  734452 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-362964"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
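The generated kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new and copied into place before init. To sanity-check such a config by hand, a dry run with the same pinned binary is one option (illustrative only; paths taken from this log):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml --dry-run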
	
	I1213 13:46:51.274024  734452 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 13:46:51.282302  734452 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:46:51.282376  734452 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:46:51.290422  734452 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 13:46:51.303253  734452 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 13:46:51.318075  734452 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1213 13:46:51.331214  734452 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 13:46:51.334976  734452 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
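The bash one-liner above is minikube's /etc/hosts refresh: drop any stale control-plane.minikube.internal entry, append the current node IP, and copy the temp file back over /etc/hosts. Spelled out as a standalone sketch with the IP from this run:

    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
      printf '192.168.76.2\tcontrol-plane.minikube.internal\n'; } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts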
	I1213 13:46:51.345829  734452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:46:51.437080  734452 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:46:51.461201  734452 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964 for IP: 192.168.76.2
	I1213 13:46:51.461228  734452 certs.go:195] generating shared ca certs ...
	I1213 13:46:51.461258  734452 certs.go:227] acquiring lock for ca certs: {Name:mkb6963f3134ffd486c672ddb3a967e56122d5d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.461456  734452 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key
	I1213 13:46:51.461517  734452 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key
	I1213 13:46:51.461535  734452 certs.go:257] generating profile certs ...
	I1213 13:46:51.461611  734452 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.key
	I1213 13:46:51.461644  734452 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.crt with IP's: []
	I1213 13:46:51.675129  734452 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.crt ...
	I1213 13:46:51.675163  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.crt: {Name:mkfc2919111fa26d81b7191d3873ecc598936940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.675356  734452 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.key ...
	I1213 13:46:51.675368  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.key: {Name:mkcca4e2f19072f042ecc8cce95f891ff7bba521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.675455  734452 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb
	I1213 13:46:51.675473  734452 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt.a735fadb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 13:46:51.732537  734452 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt.a735fadb ...
	I1213 13:46:51.732571  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt.a735fadb: {Name:mka68b1fc7336251712aa83c57233f6aaa26b56e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.732752  734452 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb ...
	I1213 13:46:51.732766  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb: {Name:mk7b2188d2ac3de30be4a0ecf05771755b89586c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.732898  734452 certs.go:382] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt.a735fadb -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt
	I1213 13:46:51.733002  734452 certs.go:386] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key
	I1213 13:46:51.733072  734452 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key
	I1213 13:46:51.733091  734452 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt with IP's: []
	I1213 13:46:51.768844  734452 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt ...
	I1213 13:46:51.768876  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt: {Name:mk54ca537df717e699f15967f0763bc1a365ba7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.769051  734452 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key ...
	I1213 13:46:51.769066  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key: {Name:mkc6731d5f061dd55c086b1529645fdd7e056a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.769254  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem (1338 bytes)
	W1213 13:46:51.769294  734452 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130_empty.pem, impossibly tiny 0 bytes
	I1213 13:46:51.769306  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:46:51.769336  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:46:51.769363  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:46:51.769392  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem (1679 bytes)
	I1213 13:46:51.769438  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:46:51.770096  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:46:51.789179  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 13:46:51.807957  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:46:51.829246  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 13:46:51.849816  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 13:46:51.867382  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 13:46:51.884431  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:46:51.901499  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 13:46:51.918590  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem --> /usr/share/ca-certificates/394130.pem (1338 bytes)
	I1213 13:46:51.938587  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /usr/share/ca-certificates/3941302.pem (1708 bytes)
	I1213 13:46:51.956885  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:46:51.976711  734452 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:46:51.990451  734452 ssh_runner.go:195] Run: openssl version
	I1213 13:46:51.996876  734452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/394130.pem
	I1213 13:46:52.004771  734452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/394130.pem /etc/ssl/certs/394130.pem
	I1213 13:46:52.013327  734452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/394130.pem
	I1213 13:46:52.017188  734452 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/394130.pem
	I1213 13:46:52.017246  734452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/394130.pem
	I1213 13:46:52.052182  734452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 13:46:52.060156  734452 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/394130.pem /etc/ssl/certs/51391683.0
	I1213 13:46:52.067555  734452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3941302.pem
	I1213 13:46:52.074980  734452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3941302.pem /etc/ssl/certs/3941302.pem
	I1213 13:46:52.083293  734452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3941302.pem
	I1213 13:46:52.087008  734452 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/3941302.pem
	I1213 13:46:52.087060  734452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3941302.pem
	I1213 13:46:52.121292  734452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 13:46:52.129202  734452 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3941302.pem /etc/ssl/certs/3ec20f2e.0
	I1213 13:46:52.136878  734452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:52.144894  734452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:46:52.152936  734452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:52.156906  734452 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:52.156974  734452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:52.192626  734452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 13:46:52.200484  734452 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
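The openssl/ln pairs above implement the standard CA hash-link layout: each trusted certificate gets a symlink named <subject-hash>.0 so OpenSSL can locate it by hash (b5213941 is minikubeCA's hash in this run). An equivalent sketch:

    # Compute the subject hash and create the hash-named symlink OpenSSL expects.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"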
	I1213 13:46:52.207749  734452 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:46:52.211283  734452 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 13:46:52.211338  734452 kubeadm.go:401] StartCluster: {Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:46:52.211418  734452 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:46:52.211486  734452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:46:52.238989  734452 cri.go:89] found id: ""
	I1213 13:46:52.239071  734452 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:46:52.248678  734452 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 13:46:52.257209  734452 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 13:46:52.257267  734452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 13:46:52.265205  734452 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 13:46:52.265226  734452 kubeadm.go:158] found existing configuration files:
	
	I1213 13:46:52.265280  734452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 13:46:52.273379  734452 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 13:46:52.273433  734452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 13:46:52.280768  734452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 13:46:52.288560  734452 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 13:46:52.288610  734452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 13:46:52.296093  734452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 13:46:52.303964  734452 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 13:46:52.304023  734452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 13:46:52.311559  734452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 13:46:52.320197  734452 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 13:46:52.320257  734452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 13:46:52.334065  734452 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 13:46:52.371455  734452 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 13:46:52.371571  734452 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 13:46:52.442098  734452 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 13:46:52.442200  734452 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1213 13:46:52.442255  734452 kubeadm.go:319] OS: Linux
	I1213 13:46:52.442323  734452 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 13:46:52.442390  734452 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 13:46:52.442455  734452 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 13:46:52.442512  734452 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 13:46:52.442578  734452 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 13:46:52.442697  734452 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 13:46:52.442826  734452 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 13:46:52.442969  734452 kubeadm.go:319] CGROUPS_IO: enabled
	I1213 13:46:52.508064  734452 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 13:46:52.508249  734452 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 13:46:52.508406  734452 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 13:46:52.516288  734452 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 13:46:52.519224  734452 out.go:252]   - Generating certificates and keys ...
	I1213 13:46:52.519355  734452 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 13:46:52.519493  734452 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 13:46:52.532097  734452 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 13:46:52.698464  734452 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 13:46:52.742997  734452 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 13:46:52.834618  734452 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 13:46:52.947440  734452 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 13:46:52.947607  734452 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-362964] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 13:46:53.014857  734452 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 13:46:53.015046  734452 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-362964] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 13:46:53.141370  734452 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 13:46:53.236321  734452 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 13:46:53.329100  734452 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 13:46:53.329196  734452 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 13:46:53.418157  734452 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 13:46:53.508241  734452 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 13:46:53.569616  734452 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 13:46:53.618621  734452 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 13:46:53.646993  734452 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 13:46:53.647697  734452 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 13:46:53.651749  734452 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 13:46:53.653114  734452 out.go:252]   - Booting up control plane ...
	I1213 13:46:53.653242  734452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 13:46:53.653571  734452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 13:46:53.654959  734452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 13:46:53.677067  734452 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 13:46:53.677242  734452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 13:46:53.684167  734452 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 13:46:53.684396  734452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 13:46:53.684462  734452 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 13:46:53.802893  734452 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 13:46:53.803078  734452 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
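The bootstrap log for newest-cni-362964 ends here while kubeadm is still polling the kubelet health endpoint it names (http://127.0.0.1:10248/healthz). That endpoint can be probed by hand from inside the node, which helps distinguish a slow control plane from a kubelet that never came up (illustrative; assumes the profile is still running):

    minikube -p newest-cni-362964 ssh -- curl -sf http://127.0.0.1:10248/healthz; echo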
	
	
	==> CRI-O <==
	Dec 13 13:46:18 no-preload-992258 crio[570]: time="2025-12-13T13:46:18.181992531Z" level=info msg="Started container" PID=1764 containerID=f8965554043a6f77f855491f3838ee4fdfe0a5709067e4b790a74ea8832af5c9 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp/dashboard-metrics-scraper id=6acd9911-334d-441a-ac48-dd8fd737a26d name=/runtime.v1.RuntimeService/StartContainer sandboxID=85dc96530c6c5b2e7edd0695e2a10ebe79f1c429439d7a2607d250579491568d
	Dec 13 13:46:19 no-preload-992258 crio[570]: time="2025-12-13T13:46:19.215170726Z" level=info msg="Removing container: 01e33e6a8976b6c42ccb91fd81806b25ffa4f585d630656b0195a61725bbf821" id=a4fe5e0c-a5bf-4641-a48e-5f934e6d7117 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 13:46:19 no-preload-992258 crio[570]: time="2025-12-13T13:46:19.228855754Z" level=info msg="Removed container 01e33e6a8976b6c42ccb91fd81806b25ffa4f585d630656b0195a61725bbf821: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp/dashboard-metrics-scraper" id=a4fe5e0c-a5bf-4641-a48e-5f934e6d7117 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 13:46:28 no-preload-992258 crio[570]: time="2025-12-13T13:46:28.239041845Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d66d4761-f9f2-4db4-a817-78f7f6c14991 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:28 no-preload-992258 crio[570]: time="2025-12-13T13:46:28.240060035Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=741e86a0-c57c-40a4-b298-312a7bb67559 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:28 no-preload-992258 crio[570]: time="2025-12-13T13:46:28.241149172Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f5487051-0431-4c86-96dc-cecb8795180d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:28 no-preload-992258 crio[570]: time="2025-12-13T13:46:28.241284103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:28 no-preload-992258 crio[570]: time="2025-12-13T13:46:28.246265844Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:28 no-preload-992258 crio[570]: time="2025-12-13T13:46:28.246457393Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/18ef10547fb4ad8c95ff1ed2a4e31b156ad847e94010d9a1b6b90190825e25ba/merged/etc/passwd: no such file or directory"
	Dec 13 13:46:28 no-preload-992258 crio[570]: time="2025-12-13T13:46:28.246487135Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/18ef10547fb4ad8c95ff1ed2a4e31b156ad847e94010d9a1b6b90190825e25ba/merged/etc/group: no such file or directory"
	Dec 13 13:46:28 no-preload-992258 crio[570]: time="2025-12-13T13:46:28.247204886Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:28 no-preload-992258 crio[570]: time="2025-12-13T13:46:28.273484748Z" level=info msg="Created container 2d56b5a331f4ca246a909031b824e174119e2403583309ef04c380d1092001eb: kube-system/storage-provisioner/storage-provisioner" id=f5487051-0431-4c86-96dc-cecb8795180d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:28 no-preload-992258 crio[570]: time="2025-12-13T13:46:28.274112101Z" level=info msg="Starting container: 2d56b5a331f4ca246a909031b824e174119e2403583309ef04c380d1092001eb" id=c69ddfb9-4df8-42bb-aced-31e4a3695ec5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:46:28 no-preload-992258 crio[570]: time="2025-12-13T13:46:28.276096983Z" level=info msg="Started container" PID=1782 containerID=2d56b5a331f4ca246a909031b824e174119e2403583309ef04c380d1092001eb description=kube-system/storage-provisioner/storage-provisioner id=c69ddfb9-4df8-42bb-aced-31e4a3695ec5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=05567250c80e881eca8da917698f8331b9417dd48cc02db837117e029e118dfb
	Dec 13 13:46:42 no-preload-992258 crio[570]: time="2025-12-13T13:46:42.121865492Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3aa28506-d521-4478-bfac-ccb6571e23b9 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:42 no-preload-992258 crio[570]: time="2025-12-13T13:46:42.199305041Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=aba35828-20ef-4d1b-ac2d-af7a788de35c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:42 no-preload-992258 crio[570]: time="2025-12-13T13:46:42.200535265Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp/dashboard-metrics-scraper" id=590c31b4-0504-4667-9721-d2bd64444308 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:42 no-preload-992258 crio[570]: time="2025-12-13T13:46:42.20066377Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:42 no-preload-992258 crio[570]: time="2025-12-13T13:46:42.229465029Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:42 no-preload-992258 crio[570]: time="2025-12-13T13:46:42.230141613Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:42 no-preload-992258 crio[570]: time="2025-12-13T13:46:42.362321651Z" level=info msg="Created container adb9dcbdc1b93162ba6534511972d54a30ff5daeb5209b44d9548d7732ab6c8a: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp/dashboard-metrics-scraper" id=590c31b4-0504-4667-9721-d2bd64444308 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:42 no-preload-992258 crio[570]: time="2025-12-13T13:46:42.363017477Z" level=info msg="Starting container: adb9dcbdc1b93162ba6534511972d54a30ff5daeb5209b44d9548d7732ab6c8a" id=0cd70670-f1b0-4be8-89f9-6c4de260d414 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:46:42 no-preload-992258 crio[570]: time="2025-12-13T13:46:42.365507661Z" level=info msg="Started container" PID=1818 containerID=adb9dcbdc1b93162ba6534511972d54a30ff5daeb5209b44d9548d7732ab6c8a description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp/dashboard-metrics-scraper id=0cd70670-f1b0-4be8-89f9-6c4de260d414 name=/runtime.v1.RuntimeService/StartContainer sandboxID=85dc96530c6c5b2e7edd0695e2a10ebe79f1c429439d7a2607d250579491568d
	Dec 13 13:46:43 no-preload-992258 crio[570]: time="2025-12-13T13:46:43.283806114Z" level=info msg="Removing container: f8965554043a6f77f855491f3838ee4fdfe0a5709067e4b790a74ea8832af5c9" id=824b65d6-8ee2-4f6c-9724-e576033f3f16 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 13:46:43 no-preload-992258 crio[570]: time="2025-12-13T13:46:43.810124076Z" level=info msg="Removed container f8965554043a6f77f855491f3838ee4fdfe0a5709067e4b790a74ea8832af5c9: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp/dashboard-metrics-scraper" id=824b65d6-8ee2-4f6c-9724-e576033f3f16 name=/runtime.v1.RuntimeService/RemoveContainer
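The CRI-O excerpt above (note the host switches to no-preload-992258 at this point) is the crio unit's journal. A comparable tail can be pulled directly, assuming journald is managing the crio service as it normally does on the kic base image:

    minikube -p no-preload-992258 ssh -- sudo journalctl -u crio --no-pager -n 30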
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	adb9dcbdc1b93       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago       Exited              dashboard-metrics-scraper   3                   85dc96530c6c5       dashboard-metrics-scraper-867fb5f87b-sj6kp   kubernetes-dashboard
	2d56b5a331f4c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           27 seconds ago       Running             storage-provisioner         1                   05567250c80e8       storage-provisioner                          kube-system
	b5e5b43f17886       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   51 seconds ago       Running             kubernetes-dashboard        0                   b7674a0570adb       kubernetes-dashboard-b84665fb8-dkjpg         kubernetes-dashboard
	801de776f7692       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           58 seconds ago       Running             coredns                     0                   b90cf7b42ff16       coredns-7d764666f9-qfkgp                     kube-system
	d92802dbdc886       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           58 seconds ago       Running             busybox                     1                   c36f5cf0a3dd5       busybox                                      default
	263fb19e23abb       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           58 seconds ago       Running             kube-proxy                  0                   d284825762c56       kube-proxy-sjrzk                             kube-system
	c671fd402d975       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           58 seconds ago       Running             kindnet-cni                 0                   a1bca9a4dfbfc       kindnet-2n8ks                                kube-system
	a78a5599787f3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Exited              storage-provisioner         0                   05567250c80e8       storage-provisioner                          kube-system
	45bf7a76efd36       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           About a minute ago   Running             kube-controller-manager     0                   23ab819fffeee       kube-controller-manager-no-preload-992258    kube-system
	9562ef2afadd5       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           About a minute ago   Running             kube-apiserver              0                   8153a0e592018       kube-apiserver-no-preload-992258             kube-system
	8dcbdf570cbc8       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           About a minute ago   Running             etcd                        0                   4fe9cf3b72557       etcd-no-preload-992258                       kube-system
	15112b75b1e5d       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           About a minute ago   Running             kube-scheduler              0                   6d759f3cb1f2c       kube-scheduler-no-preload-992258             kube-system
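The container status table is the CRI view of the node; listing with the all flag includes the Exited entries (the dashboard-metrics-scraper restarts and the first storage-provisioner attempt). To reproduce it on the node, roughly:

    minikube -p no-preload-992258 ssh -- sudo crictl ps -a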
	
	
	==> coredns [801de776f76929010a0f1c9e14f42cda1b053140754f6395d039186175e1ea80] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:45155 - 8378 "HINFO IN 5065514542116602580.4737308741301018469. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.105160759s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-992258
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-992258
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=no-preload-992258
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_44_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:44:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-992258
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:46:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:46:27 +0000   Sat, 13 Dec 2025 13:44:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:46:27 +0000   Sat, 13 Dec 2025 13:44:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:46:27 +0000   Sat, 13 Dec 2025 13:44:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 13:46:27 +0000   Sat, 13 Dec 2025 13:45:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-992258
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                a54834e7-7b06-490e-bc63-9fe908fc9136
	  Boot ID:                    3a031c38-2de5-4abf-9191-ca3cf8c37af1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-7d764666f9-qfkgp                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-no-preload-992258                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-2n8ks                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-no-preload-992258              250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-no-preload-992258     200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-sjrzk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-no-preload-992258              100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-sj6kp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-dkjpg          0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  112s  node-controller  Node no-preload-992258 event: Registered Node no-preload-992258 in Controller
	  Normal  RegisteredNode  55s   node-controller  Node no-preload-992258 event: Registered Node no-preload-992258 in Controller
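The node description above is standard `kubectl describe node` output; assuming minikube registered its usual per-profile kubeconfig context, it can be regenerated with:

    kubectl --context no-preload-992258 describe node no-preload-992258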
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 d4 5a 35 c7 c3 08 06
	[  +0.021086] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +19.681588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 0c 97 18 9b e3 08 06
	[  +0.000314] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 04 61 d2 c8 ed 08 06
	[Dec13 13:44] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[  +7.252347] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 ce fd 58 59 0f 08 06
	[  +0.000117] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	[  +1.567410] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 59 b8 80 29 4a 08 06
	[  +0.000370] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +13.814205] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 cb 6b 87 5d af 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[Dec13 13:45] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8e 49 cc d7 b3 9c 08 06
	[  +0.000851] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	
	
	==> etcd [8dcbdf570cbc878b3202fdfd071d0477d8d282c28592111b59e9f42fd44842b9] <==
	{"level":"warn","ts":"2025-12-13T13:45:56.057975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.069082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.076464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.084599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.093091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.101837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.109931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.120280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.128896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.137920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.146042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.157290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.162134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.170706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.178860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.186959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.194832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.203345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.211194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.219802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.235935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.244247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.252705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.260435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:45:56.324876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53396","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:46:55 up  2:29,  0 user,  load average: 5.88, 4.37, 2.76
	Linux no-preload-992258 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c671fd402d975e8ae24c777b924655550f32179268205781b348f0a491b5526f] <==
	I1213 13:45:57.654553       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 13:45:57.654848       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 13:45:57.655053       1 main.go:148] setting mtu 1500 for CNI 
	I1213 13:45:57.655074       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 13:45:57.655105       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T13:45:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 13:45:57.951319       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 13:45:57.951370       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 13:45:57.951389       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 13:45:57.952888       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 13:45:58.351578       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 13:45:58.351605       1 metrics.go:72] Registering metrics
	I1213 13:45:58.351679       1 controller.go:711] "Syncing nftables rules"
	I1213 13:46:07.858184       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 13:46:07.858233       1 main.go:301] handling current node
	I1213 13:46:17.860963       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 13:46:17.861001       1 main.go:301] handling current node
	I1213 13:46:27.857359       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 13:46:27.857401       1 main.go:301] handling current node
	I1213 13:46:37.857933       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 13:46:37.857997       1 main.go:301] handling current node
	I1213 13:46:47.857962       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 13:46:47.858043       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9562ef2afadd58588eb9f2ee3f8f0cf7f987ad9ae64f202a3c2bc83ff04864c0] <==
	I1213 13:45:56.919385       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1213 13:45:56.919474       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 13:45:56.919484       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 13:45:56.919570       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1213 13:45:56.919619       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 13:45:56.921035       1 aggregator.go:187] initial CRD sync complete...
	I1213 13:45:56.921049       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 13:45:56.921056       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 13:45:56.921062       1 cache.go:39] Caches are synced for autoregister controller
	I1213 13:45:56.926822       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1213 13:45:56.927298       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1213 13:45:56.942663       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 13:45:56.944733       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1213 13:45:56.959383       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:45:57.153035       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 13:45:57.289920       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 13:45:57.319190       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 13:45:57.341576       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 13:45:57.348535       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 13:45:57.387942       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.217.154"}
	I1213 13:45:57.396545       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.182.84"}
	I1213 13:45:57.811636       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1213 13:46:00.480766       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 13:46:00.681546       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 13:46:00.776986       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [45bf7a76efd360f1d23c44bb11c5c8a0f673954074b69b3130fea721533cb52c] <==
	I1213 13:46:00.082301       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.082302       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.082509       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.082580       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.082658       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.082910       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.082957       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.083963       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.083976       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.084010       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.087628       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.087648       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 13:46:00.088072       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.089859       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.089878       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.089916       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.089938       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.089955       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.089989       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.091379       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.094938       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.188202       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.189297       1 shared_informer.go:377] "Caches are synced"
	I1213 13:46:00.189315       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 13:46:00.189320       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [263fb19e23abb4d9244914e355a9fe801ab84ad39c6d84f9d3d30afae3172ba2] <==
	I1213 13:45:57.538531       1 server_linux.go:53] "Using iptables proxy"
	I1213 13:45:57.604283       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 13:45:57.704565       1 shared_informer.go:377] "Caches are synced"
	I1213 13:45:57.704608       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1213 13:45:57.704678       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:45:57.722769       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:45:57.722851       1 server_linux.go:136] "Using iptables Proxier"
	I1213 13:45:57.727683       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:45:57.728068       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 13:45:57.728089       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:45:57.729680       1 config.go:309] "Starting node config controller"
	I1213 13:45:57.729702       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:45:57.729752       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:45:57.729759       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:45:57.729805       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:45:57.730365       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:45:57.730467       1 config.go:200] "Starting service config controller"
	I1213 13:45:57.730511       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:45:57.829903       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 13:45:57.829915       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:45:57.830818       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 13:45:57.830849       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [15112b75b1e5daf4777acbd4a1bc72aa48be95dbc7a9d989384f13be2d385572] <==
	I1213 13:45:54.841227       1 serving.go:386] Generated self-signed cert in-memory
	W1213 13:45:56.830485       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 13:45:56.830538       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 13:45:56.830550       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 13:45:56.830558       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 13:45:56.895992       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1213 13:45:56.896105       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:45:56.899346       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 13:45:56.899590       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 13:45:56.900826       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 13:45:56.900303       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 13:45:57.001186       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 13 13:46:18 no-preload-992258 kubelet[721]: I1213 13:46:18.120827     721 scope.go:122] "RemoveContainer" containerID="01e33e6a8976b6c42ccb91fd81806b25ffa4f585d630656b0195a61725bbf821"
	Dec 13 13:46:18 no-preload-992258 kubelet[721]: E1213 13:46:18.206902     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp" containerName="dashboard-metrics-scraper"
	Dec 13 13:46:18 no-preload-992258 kubelet[721]: I1213 13:46:18.218813     721 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp" podStartSLOduration=1.099450118 podStartE2EDuration="18.218794458s" podCreationTimestamp="2025-12-13 13:46:00 +0000 UTC" firstStartedPulling="2025-12-13 13:46:01.004922072 +0000 UTC m=+6.972561678" lastFinishedPulling="2025-12-13 13:46:18.124266398 +0000 UTC m=+24.091906018" observedRunningTime="2025-12-13 13:46:18.21871469 +0000 UTC m=+24.186354315" watchObservedRunningTime="2025-12-13 13:46:18.218794458 +0000 UTC m=+24.186434087"
	Dec 13 13:46:19 no-preload-992258 kubelet[721]: I1213 13:46:19.212928     721 scope.go:122] "RemoveContainer" containerID="01e33e6a8976b6c42ccb91fd81806b25ffa4f585d630656b0195a61725bbf821"
	Dec 13 13:46:19 no-preload-992258 kubelet[721]: E1213 13:46:19.213896     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp" containerName="dashboard-metrics-scraper"
	Dec 13 13:46:19 no-preload-992258 kubelet[721]: I1213 13:46:19.213940     721 scope.go:122] "RemoveContainer" containerID="f8965554043a6f77f855491f3838ee4fdfe0a5709067e4b790a74ea8832af5c9"
	Dec 13 13:46:19 no-preload-992258 kubelet[721]: E1213 13:46:19.214141     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-sj6kp_kubernetes-dashboard(7120637b-230f-468a-afeb-c8e414127e61)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp" podUID="7120637b-230f-468a-afeb-c8e414127e61"
	Dec 13 13:46:26 no-preload-992258 kubelet[721]: E1213 13:46:26.870467     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp" containerName="dashboard-metrics-scraper"
	Dec 13 13:46:26 no-preload-992258 kubelet[721]: I1213 13:46:26.870503     721 scope.go:122] "RemoveContainer" containerID="f8965554043a6f77f855491f3838ee4fdfe0a5709067e4b790a74ea8832af5c9"
	Dec 13 13:46:26 no-preload-992258 kubelet[721]: E1213 13:46:26.870718     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-sj6kp_kubernetes-dashboard(7120637b-230f-468a-afeb-c8e414127e61)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp" podUID="7120637b-230f-468a-afeb-c8e414127e61"
	Dec 13 13:46:28 no-preload-992258 kubelet[721]: I1213 13:46:28.238553     721 scope.go:122] "RemoveContainer" containerID="a78a5599787f3499391e0c432d3d1abd39385a3618ed54a5cba6601b8a71284b"
	Dec 13 13:46:37 no-preload-992258 kubelet[721]: E1213 13:46:37.323686     721 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-qfkgp" containerName="coredns"
	Dec 13 13:46:42 no-preload-992258 kubelet[721]: E1213 13:46:42.121337     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp" containerName="dashboard-metrics-scraper"
	Dec 13 13:46:42 no-preload-992258 kubelet[721]: I1213 13:46:42.121376     721 scope.go:122] "RemoveContainer" containerID="f8965554043a6f77f855491f3838ee4fdfe0a5709067e4b790a74ea8832af5c9"
	Dec 13 13:46:43 no-preload-992258 kubelet[721]: I1213 13:46:43.282505     721 scope.go:122] "RemoveContainer" containerID="f8965554043a6f77f855491f3838ee4fdfe0a5709067e4b790a74ea8832af5c9"
	Dec 13 13:46:43 no-preload-992258 kubelet[721]: E1213 13:46:43.282723     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp" containerName="dashboard-metrics-scraper"
	Dec 13 13:46:43 no-preload-992258 kubelet[721]: I1213 13:46:43.282754     721 scope.go:122] "RemoveContainer" containerID="adb9dcbdc1b93162ba6534511972d54a30ff5daeb5209b44d9548d7732ab6c8a"
	Dec 13 13:46:43 no-preload-992258 kubelet[721]: E1213 13:46:43.282995     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-sj6kp_kubernetes-dashboard(7120637b-230f-468a-afeb-c8e414127e61)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp" podUID="7120637b-230f-468a-afeb-c8e414127e61"
	Dec 13 13:46:46 no-preload-992258 kubelet[721]: E1213 13:46:46.871100     721 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp" containerName="dashboard-metrics-scraper"
	Dec 13 13:46:46 no-preload-992258 kubelet[721]: I1213 13:46:46.871145     721 scope.go:122] "RemoveContainer" containerID="adb9dcbdc1b93162ba6534511972d54a30ff5daeb5209b44d9548d7732ab6c8a"
	Dec 13 13:46:46 no-preload-992258 kubelet[721]: E1213 13:46:46.871323     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-sj6kp_kubernetes-dashboard(7120637b-230f-468a-afeb-c8e414127e61)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-sj6kp" podUID="7120637b-230f-468a-afeb-c8e414127e61"
	Dec 13 13:46:51 no-preload-992258 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 13:46:51 no-preload-992258 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 13:46:51 no-preload-992258 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 13:46:51 no-preload-992258 systemd[1]: kubelet.service: Consumed 1.797s CPU time.
	
	
	==> kubernetes-dashboard [b5e5b43f17886e53be725c2b848298a4b1825dc9c18fa4ea1aec41a64b43407d] <==
	2025/12/13 13:46:04 Starting overwatch
	2025/12/13 13:46:04 Using namespace: kubernetes-dashboard
	2025/12/13 13:46:04 Using in-cluster config to connect to apiserver
	2025/12/13 13:46:04 Using secret token for csrf signing
	2025/12/13 13:46:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 13:46:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 13:46:04 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/13 13:46:04 Generating JWE encryption key
	2025/12/13 13:46:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 13:46:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 13:46:04 Initializing JWE encryption key from synchronized object
	2025/12/13 13:46:04 Creating in-cluster Sidecar client
	2025/12/13 13:46:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 13:46:04 Serving insecurely on HTTP port: 9090
	2025/12/13 13:46:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2d56b5a331f4ca246a909031b824e174119e2403583309ef04c380d1092001eb] <==
	I1213 13:46:28.296367       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 13:46:28.296412       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 13:46:28.298261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:31.753131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:36.013721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:39.614917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:42.674385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:45.697342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:45.702471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 13:46:45.702637       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 13:46:45.702864       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-992258_bb3e5d11-8254-4306-b990-23cb0fb499e0!
	I1213 13:46:45.702822       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8639f770-ba62-4b85-93df-6f4c8eca72ae", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-992258_bb3e5d11-8254-4306-b990-23cb0fb499e0 became leader
	W1213 13:46:45.705577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:45.709669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 13:46:45.803388       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-992258_bb3e5d11-8254-4306-b990-23cb0fb499e0!
	W1213 13:46:47.713204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:47.717885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:49.721328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:49.725811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:51.728426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:51.732292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:53.735071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:53.740470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:55.743489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:55.747581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a78a5599787f3499391e0c432d3d1abd39385a3618ed54a5cba6601b8a71284b] <==
	I1213 13:45:57.494896       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 13:46:27.497259       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-992258 -n no-preload-992258
E1213 13:46:56.284110  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/auto-884214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-992258 -n no-preload-992258: exit status 2 (402.539106ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-992258 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (5.73s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-973953 --alsologtostderr -v=1
E1213 13:46:58.207532  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/auto-884214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-973953 --alsologtostderr -v=1: exit status 80 (2.442848921s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-973953 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:46:57.793439  739466 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:46:57.793676  739466 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:46:57.793682  739466 out.go:374] Setting ErrFile to fd 2...
	I1213 13:46:57.793686  739466 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:46:57.793872  739466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:46:57.794108  739466 out.go:368] Setting JSON to false
	I1213 13:46:57.794129  739466 mustload.go:66] Loading cluster: embed-certs-973953
	I1213 13:46:57.794513  739466 config.go:182] Loaded profile config "embed-certs-973953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:46:57.794936  739466 cli_runner.go:164] Run: docker container inspect embed-certs-973953 --format={{.State.Status}}
	I1213 13:46:57.813753  739466 host.go:66] Checking if "embed-certs-973953" exists ...
	I1213 13:46:57.814164  739466 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:46:57.881541  739466 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-13 13:46:57.868702411 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:46:57.882200  739466 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765613186-22122/minikube-v1.37.0-1765613186-22122-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765613186-22122-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-973953 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1213 13:46:57.884675  739466 out.go:179] * Pausing node embed-certs-973953 ... 
	I1213 13:46:57.885726  739466 host.go:66] Checking if "embed-certs-973953" exists ...
	I1213 13:46:57.886013  739466 ssh_runner.go:195] Run: systemctl --version
	I1213 13:46:57.886060  739466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-973953
	I1213 13:46:57.904922  739466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/embed-certs-973953/id_rsa Username:docker}
	I1213 13:46:58.000019  739466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:46:58.012993  739466 pause.go:52] kubelet running: true
	I1213 13:46:58.013097  739466 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 13:46:58.186060  739466 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 13:46:58.186216  739466 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 13:46:58.273455  739466 cri.go:89] found id: "efd7617437d9c5becbcfe2a0765d7577e574de74c964f48ef5f0c61f98e15c5d"
	I1213 13:46:58.273497  739466 cri.go:89] found id: "7c492a13369dcfd1ee3f016e954fbecf54508fa7ba80fcd6015ec64cf928a302"
	I1213 13:46:58.273505  739466 cri.go:89] found id: "a3bd12ac5959fa76ebe71bcd6e4bce6459412f36c9ca3212eaeb9f821e6a2c7e"
	I1213 13:46:58.273512  739466 cri.go:89] found id: "6c555c3d5d969e912b7a13fc6ea032d9b5037a541f10e177ed9f435d13f5bf08"
	I1213 13:46:58.273517  739466 cri.go:89] found id: "25179d237bb92b28ed06c458b55b40813c605ade462e0315ffbf3dd6a5233072"
	I1213 13:46:58.273525  739466 cri.go:89] found id: "ca59722508ee8428d337934b1ea258c96ebcf5e6b597926df8e7c55eb6a97674"
	I1213 13:46:58.273530  739466 cri.go:89] found id: "63a2ba4a5a1d996ff60a23b991b5a0cfa5dc9703b1f26e1efb01ad5545a6e669"
	I1213 13:46:58.273536  739466 cri.go:89] found id: "447b95afd76fcddb599b0f25dc7d2ae95263bb9a7ac29ae570889adee6a816b5"
	I1213 13:46:58.273547  739466 cri.go:89] found id: "628ec34c6d25dfe03110c51ea75cc04af49fd848dda5cc30d4f2618ba82a847e"
	I1213 13:46:58.273573  739466 cri.go:89] found id: "88fa874dcae8ecbde6c678ae8ef9b5c71b4742998a3c98d303aeed286a42e98c"
	I1213 13:46:58.273587  739466 cri.go:89] found id: "829175c211a730469b696ac526ac2cf801bcf3f3786e55f7b59979ffe20b709e"
	I1213 13:46:58.273593  739466 cri.go:89] found id: ""
	I1213 13:46:58.273650  739466 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:46:58.289300  739466 retry.go:31] will retry after 157.897325ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:46:58Z" level=error msg="open /run/runc: no such file or directory"
	I1213 13:46:58.447738  739466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:46:58.471849  739466 pause.go:52] kubelet running: false
	I1213 13:46:58.471917  739466 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 13:46:58.663053  739466 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 13:46:58.663174  739466 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 13:46:58.737139  739466 cri.go:89] found id: "efd7617437d9c5becbcfe2a0765d7577e574de74c964f48ef5f0c61f98e15c5d"
	I1213 13:46:58.737167  739466 cri.go:89] found id: "7c492a13369dcfd1ee3f016e954fbecf54508fa7ba80fcd6015ec64cf928a302"
	I1213 13:46:58.737173  739466 cri.go:89] found id: "a3bd12ac5959fa76ebe71bcd6e4bce6459412f36c9ca3212eaeb9f821e6a2c7e"
	I1213 13:46:58.737176  739466 cri.go:89] found id: "6c555c3d5d969e912b7a13fc6ea032d9b5037a541f10e177ed9f435d13f5bf08"
	I1213 13:46:58.737179  739466 cri.go:89] found id: "25179d237bb92b28ed06c458b55b40813c605ade462e0315ffbf3dd6a5233072"
	I1213 13:46:58.737183  739466 cri.go:89] found id: "ca59722508ee8428d337934b1ea258c96ebcf5e6b597926df8e7c55eb6a97674"
	I1213 13:46:58.737185  739466 cri.go:89] found id: "63a2ba4a5a1d996ff60a23b991b5a0cfa5dc9703b1f26e1efb01ad5545a6e669"
	I1213 13:46:58.737188  739466 cri.go:89] found id: "447b95afd76fcddb599b0f25dc7d2ae95263bb9a7ac29ae570889adee6a816b5"
	I1213 13:46:58.737193  739466 cri.go:89] found id: "628ec34c6d25dfe03110c51ea75cc04af49fd848dda5cc30d4f2618ba82a847e"
	I1213 13:46:58.737212  739466 cri.go:89] found id: "88fa874dcae8ecbde6c678ae8ef9b5c71b4742998a3c98d303aeed286a42e98c"
	I1213 13:46:58.737228  739466 cri.go:89] found id: "829175c211a730469b696ac526ac2cf801bcf3f3786e55f7b59979ffe20b709e"
	I1213 13:46:58.737238  739466 cri.go:89] found id: ""
	I1213 13:46:58.737282  739466 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:46:58.749808  739466 retry.go:31] will retry after 521.521754ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:46:58Z" level=error msg="open /run/runc: no such file or directory"
	I1213 13:46:59.271487  739466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:46:59.285631  739466 pause.go:52] kubelet running: false
	I1213 13:46:59.285695  739466 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 13:46:59.461027  739466 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 13:46:59.461114  739466 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 13:46:59.549080  739466 cri.go:89] found id: "efd7617437d9c5becbcfe2a0765d7577e574de74c964f48ef5f0c61f98e15c5d"
	I1213 13:46:59.549109  739466 cri.go:89] found id: "7c492a13369dcfd1ee3f016e954fbecf54508fa7ba80fcd6015ec64cf928a302"
	I1213 13:46:59.549116  739466 cri.go:89] found id: "a3bd12ac5959fa76ebe71bcd6e4bce6459412f36c9ca3212eaeb9f821e6a2c7e"
	I1213 13:46:59.549121  739466 cri.go:89] found id: "6c555c3d5d969e912b7a13fc6ea032d9b5037a541f10e177ed9f435d13f5bf08"
	I1213 13:46:59.549125  739466 cri.go:89] found id: "25179d237bb92b28ed06c458b55b40813c605ade462e0315ffbf3dd6a5233072"
	I1213 13:46:59.549130  739466 cri.go:89] found id: "ca59722508ee8428d337934b1ea258c96ebcf5e6b597926df8e7c55eb6a97674"
	I1213 13:46:59.549135  739466 cri.go:89] found id: "63a2ba4a5a1d996ff60a23b991b5a0cfa5dc9703b1f26e1efb01ad5545a6e669"
	I1213 13:46:59.549139  739466 cri.go:89] found id: "447b95afd76fcddb599b0f25dc7d2ae95263bb9a7ac29ae570889adee6a816b5"
	I1213 13:46:59.549144  739466 cri.go:89] found id: "628ec34c6d25dfe03110c51ea75cc04af49fd848dda5cc30d4f2618ba82a847e"
	I1213 13:46:59.549152  739466 cri.go:89] found id: "88fa874dcae8ecbde6c678ae8ef9b5c71b4742998a3c98d303aeed286a42e98c"
	I1213 13:46:59.549157  739466 cri.go:89] found id: "829175c211a730469b696ac526ac2cf801bcf3f3786e55f7b59979ffe20b709e"
	I1213 13:46:59.549161  739466 cri.go:89] found id: ""
	I1213 13:46:59.549207  739466 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:46:59.563885  739466 retry.go:31] will retry after 338.951359ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:46:59Z" level=error msg="open /run/runc: no such file or directory"
	I1213 13:46:59.903478  739466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:46:59.916122  739466 pause.go:52] kubelet running: false
	I1213 13:46:59.916192  739466 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 13:47:00.066130  739466 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 13:47:00.066223  739466 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 13:47:00.142051  739466 cri.go:89] found id: "efd7617437d9c5becbcfe2a0765d7577e574de74c964f48ef5f0c61f98e15c5d"
	I1213 13:47:00.142077  739466 cri.go:89] found id: "7c492a13369dcfd1ee3f016e954fbecf54508fa7ba80fcd6015ec64cf928a302"
	I1213 13:47:00.142083  739466 cri.go:89] found id: "a3bd12ac5959fa76ebe71bcd6e4bce6459412f36c9ca3212eaeb9f821e6a2c7e"
	I1213 13:47:00.142088  739466 cri.go:89] found id: "6c555c3d5d969e912b7a13fc6ea032d9b5037a541f10e177ed9f435d13f5bf08"
	I1213 13:47:00.142092  739466 cri.go:89] found id: "25179d237bb92b28ed06c458b55b40813c605ade462e0315ffbf3dd6a5233072"
	I1213 13:47:00.142096  739466 cri.go:89] found id: "ca59722508ee8428d337934b1ea258c96ebcf5e6b597926df8e7c55eb6a97674"
	I1213 13:47:00.142100  739466 cri.go:89] found id: "63a2ba4a5a1d996ff60a23b991b5a0cfa5dc9703b1f26e1efb01ad5545a6e669"
	I1213 13:47:00.142105  739466 cri.go:89] found id: "447b95afd76fcddb599b0f25dc7d2ae95263bb9a7ac29ae570889adee6a816b5"
	I1213 13:47:00.142109  739466 cri.go:89] found id: "628ec34c6d25dfe03110c51ea75cc04af49fd848dda5cc30d4f2618ba82a847e"
	I1213 13:47:00.142132  739466 cri.go:89] found id: "88fa874dcae8ecbde6c678ae8ef9b5c71b4742998a3c98d303aeed286a42e98c"
	I1213 13:47:00.142137  739466 cri.go:89] found id: "829175c211a730469b696ac526ac2cf801bcf3f3786e55f7b59979ffe20b709e"
	I1213 13:47:00.142141  739466 cri.go:89] found id: ""
	I1213 13:47:00.142193  739466 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:47:00.157942  739466 out.go:203] 
	W1213 13:47:00.159455  739466 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:47:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:47:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 13:47:00.159493  739466 out.go:285] * 
	* 
	W1213 13:47:00.166184  739466 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 13:47:00.169301  739466 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-973953 --alsologtostderr -v=1 failed: exit status 80
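The failing step above is the `sudo runc list -f json` call, which exits with "open /run/runc: no such file or directory" after the kubelet has been stopped. A minimal sketch for repeating that check by hand, reusing the commands the pause path logs above (the profile name comes from this run; probing /run/crun is an assumption that crio might be backed by crun rather than runc, which these logs do not confirm):

	# list the CRI containers the pause path enumerates
	minikube ssh -p embed-certs-973953 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the call that fails in the log above with "open /run/runc: no such file or directory"
	minikube ssh -p embed-certs-973953 -- sudo runc list -f json
	# assumption: check whether the runtime state lives under /run/crun instead of /run/runc
	minikube ssh -p embed-certs-973953 -- ls -d /run/runc /run/crun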
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-973953
helpers_test.go:244: (dbg) docker inspect embed-certs-973953:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2417f9c1840239bdbd95cda8d94a24c63c197abb274212b1cc09a3ca882e96e4",
	        "Created": "2025-12-13T13:44:57.200288812Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 726630,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:46:01.228368152Z",
	            "FinishedAt": "2025-12-13T13:45:59.801006561Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/2417f9c1840239bdbd95cda8d94a24c63c197abb274212b1cc09a3ca882e96e4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2417f9c1840239bdbd95cda8d94a24c63c197abb274212b1cc09a3ca882e96e4/hostname",
	        "HostsPath": "/var/lib/docker/containers/2417f9c1840239bdbd95cda8d94a24c63c197abb274212b1cc09a3ca882e96e4/hosts",
	        "LogPath": "/var/lib/docker/containers/2417f9c1840239bdbd95cda8d94a24c63c197abb274212b1cc09a3ca882e96e4/2417f9c1840239bdbd95cda8d94a24c63c197abb274212b1cc09a3ca882e96e4-json.log",
	        "Name": "/embed-certs-973953",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-973953:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-973953",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2417f9c1840239bdbd95cda8d94a24c63c197abb274212b1cc09a3ca882e96e4",
	                "LowerDir": "/var/lib/docker/overlay2/36f6f9a6afe8167407de04e815de1558c807ba641d95def877516655555a8d70-init/diff:/var/lib/docker/overlay2/2ab30f867418f233812f5ff754587aaeab7569a5579dc6a5c99873a35cf81eb6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/36f6f9a6afe8167407de04e815de1558c807ba641d95def877516655555a8d70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/36f6f9a6afe8167407de04e815de1558c807ba641d95def877516655555a8d70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/36f6f9a6afe8167407de04e815de1558c807ba641d95def877516655555a8d70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-973953",
	                "Source": "/var/lib/docker/volumes/embed-certs-973953/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-973953",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-973953",
	                "name.minikube.sigs.k8s.io": "embed-certs-973953",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "553d611c04f57618fddfc8bb6cc87d1daf5a01e93e0177eef4b6f7058ff94334",
	            "SandboxKey": "/var/run/docker/netns/553d611c04f5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33504"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33505"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33506"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-973953": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bdd21ce485b56ca4b32dd68df0837eaa769f5169ec1531dea2c7dd03d846c883",
	                    "EndpointID": "fbb2fe3ad028d0d3fd6a39f9e66f15740a131fb75a7b4266ac55ada44c320614",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "96:37:ee:f4:fe:8f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-973953",
	                        "2417f9c18402"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-973953 -n embed-certs-973953
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-973953 -n embed-certs-973953: exit status 2 (327.025505ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-973953 logs -n 25
E1213 13:47:00.769234  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/auto-884214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-973953 logs -n 25: (1.126635261s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable metrics-server -p no-preload-992258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-417583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p old-k8s-version-417583 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ stop    │ -p no-preload-992258 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ addons  │ enable metrics-server -p embed-certs-973953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ stop    │ -p embed-certs-973953 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ addons  │ enable dashboard -p no-preload-992258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p no-preload-992258 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ addons  │ enable dashboard -p embed-certs-973953 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p embed-certs-973953 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-038239 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-038239 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ image   │ old-k8s-version-417583 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p old-k8s-version-417583 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-038239 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p default-k8s-diff-port-038239 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ delete  │ -p old-k8s-version-417583                                                                                                                                                                                                                            │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ delete  │ -p old-k8s-version-417583                                                                                                                                                                                                                            │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p newest-cni-362964 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ image   │ no-preload-992258 image list --format=json                                                                                                                                                                                                           │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p no-preload-992258 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ delete  │ -p no-preload-992258                                                                                                                                                                                                                                 │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ image   │ embed-certs-973953 image list --format=json                                                                                                                                                                                                          │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p embed-certs-973953 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ delete  │ -p no-preload-992258                                                                                                                                                                                                                                 │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:46:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:46:38.807259  734452 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:46:38.807356  734452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:46:38.807364  734452 out.go:374] Setting ErrFile to fd 2...
	I1213 13:46:38.807368  734452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:46:38.807581  734452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:46:38.808124  734452 out.go:368] Setting JSON to false
	I1213 13:46:38.809505  734452 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8947,"bootTime":1765624652,"procs":408,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:46:38.809572  734452 start.go:143] virtualization: kvm guest
	I1213 13:46:38.811798  734452 out.go:179] * [newest-cni-362964] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:46:38.813823  734452 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:46:38.813876  734452 notify.go:221] Checking for updates...
	I1213 13:46:38.816262  734452 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:46:38.817585  734452 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:46:38.818693  734452 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:46:38.820057  734452 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:46:38.821335  734452 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:46:38.823198  734452 config.go:182] Loaded profile config "default-k8s-diff-port-038239": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:46:38.823338  734452 config.go:182] Loaded profile config "embed-certs-973953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:46:38.823469  734452 config.go:182] Loaded profile config "no-preload-992258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:46:38.823581  734452 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:46:38.861614  734452 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:46:38.861761  734452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:46:38.931148  734452 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-13 13:46:38.919230241 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:46:38.931318  734452 docker.go:319] overlay module found
	I1213 13:46:38.933289  734452 out.go:179] * Using the docker driver based on user configuration
	I1213 13:46:38.934577  734452 start.go:309] selected driver: docker
	I1213 13:46:38.934599  734452 start.go:927] validating driver "docker" against <nil>
	I1213 13:46:38.934616  734452 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:46:38.935491  734452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:46:39.004706  734452 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-13 13:46:38.992987781 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:46:39.004928  734452 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 13:46:39.004966  734452 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 13:46:39.005271  734452 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 13:46:39.007551  734452 out.go:179] * Using Docker driver with root privileges
	I1213 13:46:39.008611  734452 cni.go:84] Creating CNI manager for ""
	I1213 13:46:39.008719  734452 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:46:39.008737  734452 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 13:46:39.008854  734452 start.go:353] cluster config:
	{Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:46:39.010974  734452 out.go:179] * Starting "newest-cni-362964" primary control-plane node in "newest-cni-362964" cluster
	I1213 13:46:39.012247  734452 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 13:46:39.013645  734452 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 13:46:39.016856  734452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:46:39.016895  734452 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1213 13:46:39.016914  734452 cache.go:65] Caching tarball of preloaded images
	I1213 13:46:39.016962  734452 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 13:46:39.017009  734452 preload.go:238] Found /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 13:46:39.017022  734452 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 13:46:39.017144  734452 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/config.json ...
	I1213 13:46:39.017168  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/config.json: {Name:mk03f8124fe1745099f3d3cb3fe7fe5ae5e6b929 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:39.044079  734452 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 13:46:39.044103  734452 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 13:46:39.044123  734452 cache.go:243] Successfully downloaded all kic artifacts
	I1213 13:46:39.044162  734452 start.go:360] acquireMachinesLock for newest-cni-362964: {Name:mk61572d281c54a6e0670409b0733cc12a8d00e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:46:39.044269  734452 start.go:364] duration metric: took 87.606µs to acquireMachinesLock for "newest-cni-362964"
	I1213 13:46:39.044501  734452 start.go:93] Provisioning new machine with config: &{Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:46:39.044595  734452 start.go:125] createHost starting for "" (driver="docker")
	I1213 13:46:37.593032  723278 pod_ready.go:94] pod "coredns-7d764666f9-qfkgp" is "Ready"
	I1213 13:46:37.593060  723278 pod_ready.go:86] duration metric: took 39.506081408s for pod "coredns-7d764666f9-qfkgp" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.595721  723278 pod_ready.go:83] waiting for pod "etcd-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.600005  723278 pod_ready.go:94] pod "etcd-no-preload-992258" is "Ready"
	I1213 13:46:37.600027  723278 pod_ready.go:86] duration metric: took 4.283645ms for pod "etcd-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.602349  723278 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.606335  723278 pod_ready.go:94] pod "kube-apiserver-no-preload-992258" is "Ready"
	I1213 13:46:37.606353  723278 pod_ready.go:86] duration metric: took 3.985408ms for pod "kube-apiserver-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.608278  723278 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.793439  723278 pod_ready.go:94] pod "kube-controller-manager-no-preload-992258" is "Ready"
	I1213 13:46:37.793538  723278 pod_ready.go:86] duration metric: took 185.240657ms for pod "kube-controller-manager-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.993814  723278 pod_ready.go:83] waiting for pod "kube-proxy-sjrzk" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:38.391287  723278 pod_ready.go:94] pod "kube-proxy-sjrzk" is "Ready"
	I1213 13:46:38.391316  723278 pod_ready.go:86] duration metric: took 397.467202ms for pod "kube-proxy-sjrzk" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:38.592664  723278 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:38.991819  723278 pod_ready.go:94] pod "kube-scheduler-no-preload-992258" is "Ready"
	I1213 13:46:38.991855  723278 pod_ready.go:86] duration metric: took 399.165979ms for pod "kube-scheduler-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:38.991870  723278 pod_ready.go:40] duration metric: took 40.907684385s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:46:39.055074  723278 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1213 13:46:39.056693  723278 out.go:179] * Done! kubectl is now configured to use "no-preload-992258" cluster and "default" namespace by default
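	The pod_ready.go entries above show the harness waiting for each labelled kube-system pod (CoreDNS, etcd, apiserver, controller-manager, proxy, scheduler) to report the PodReady condition. A rough client-go sketch of that kind of wait is shown below; it is not minikube's actual implementation, and the package name, namespace handling and 2-second poll interval are illustrative.

    package readycheck

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodsReady polls until every kube-system pod matching selector reports
    // the PodReady condition, or the context expires.
    func waitPodsReady(ctx context.Context, cs kubernetes.Interface, selector string) error {
    	for {
    		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
    		if err == nil {
    			allReady := len(pods.Items) > 0
    			for _, p := range pods.Items {
    				ready := false
    				for _, c := range p.Status.Conditions {
    					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    						ready = true
    						break
    					}
    				}
    				if !ready {
    					allReady = false
    					break
    				}
    			}
    			if allReady {
    				return nil
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(2 * time.Second):
    		}
    	}
    }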
	I1213 13:46:37.744577  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 13:46:37.744596  730912 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 13:46:37.744659  730912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-038239
	I1213 13:46:37.769735  730912 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 13:46:37.769842  730912 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 13:46:37.769924  730912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa Username:docker}
	I1213 13:46:37.769942  730912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-038239
	I1213 13:46:37.773997  730912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa Username:docker}
	I1213 13:46:37.806607  730912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa Username:docker}
	I1213 13:46:37.885020  730912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:46:37.892323  730912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:46:37.901908  730912 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-038239" to be "Ready" ...
	I1213 13:46:37.908074  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 13:46:37.908095  730912 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 13:46:37.924625  730912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 13:46:37.926038  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 13:46:37.926060  730912 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 13:46:37.942015  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 13:46:37.942038  730912 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 13:46:37.961315  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 13:46:37.961339  730912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 13:46:37.979600  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 13:46:37.979629  730912 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 13:46:38.003635  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 13:46:38.003660  730912 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 13:46:38.019334  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 13:46:38.019359  730912 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 13:46:38.036465  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 13:46:38.036507  730912 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 13:46:38.053804  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 13:46:38.053835  730912 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 13:46:38.071650  730912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 13:46:39.597072  730912 node_ready.go:49] node "default-k8s-diff-port-038239" is "Ready"
	I1213 13:46:39.597127  730912 node_ready.go:38] duration metric: took 1.695171527s for node "default-k8s-diff-port-038239" to be "Ready" ...
	I1213 13:46:39.597146  730912 api_server.go:52] waiting for apiserver process to appear ...
	I1213 13:46:39.597331  730912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:46:40.220696  730912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.328338683s)
	I1213 13:46:40.220801  730912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.296116857s)
	I1213 13:46:40.220919  730912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.149240842s)
	I1213 13:46:40.221000  730912 api_server.go:72] duration metric: took 2.51244991s to wait for apiserver process to appear ...
	I1213 13:46:40.221052  730912 api_server.go:88] waiting for apiserver healthz status ...
	I1213 13:46:40.221075  730912 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1213 13:46:40.223057  730912 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-038239 addons enable metrics-server
	
	I1213 13:46:40.226524  730912 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:46:40.226548  730912 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:46:40.228246  730912 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1213 13:46:40.229402  730912 addons.go:530] duration metric: took 2.520798966s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	W1213 13:46:37.552331  726383 pod_ready.go:104] pod "coredns-66bc5c9577-bl59n" is not "Ready", error: <nil>
	W1213 13:46:39.558845  726383 pod_ready.go:104] pod "coredns-66bc5c9577-bl59n" is not "Ready", error: <nil>
	I1213 13:46:39.050825  734452 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 13:46:39.051127  734452 start.go:159] libmachine.API.Create for "newest-cni-362964" (driver="docker")
	I1213 13:46:39.051170  734452 client.go:173] LocalClient.Create starting
	I1213 13:46:39.051291  734452 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem
	I1213 13:46:39.051338  734452 main.go:143] libmachine: Decoding PEM data...
	I1213 13:46:39.051367  734452 main.go:143] libmachine: Parsing certificate...
	I1213 13:46:39.051431  734452 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem
	I1213 13:46:39.051459  734452 main.go:143] libmachine: Decoding PEM data...
	I1213 13:46:39.051478  734452 main.go:143] libmachine: Parsing certificate...
	I1213 13:46:39.051941  734452 cli_runner.go:164] Run: docker network inspect newest-cni-362964 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 13:46:39.074137  734452 cli_runner.go:211] docker network inspect newest-cni-362964 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 13:46:39.074224  734452 network_create.go:284] running [docker network inspect newest-cni-362964] to gather additional debugging logs...
	I1213 13:46:39.074248  734452 cli_runner.go:164] Run: docker network inspect newest-cni-362964
	W1213 13:46:39.102273  734452 cli_runner.go:211] docker network inspect newest-cni-362964 returned with exit code 1
	I1213 13:46:39.102343  734452 network_create.go:287] error running [docker network inspect newest-cni-362964]: docker network inspect newest-cni-362964: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-362964 not found
	I1213 13:46:39.102377  734452 network_create.go:289] output of [docker network inspect newest-cni-362964]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-362964 not found
	
	** /stderr **
	I1213 13:46:39.102549  734452 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:46:39.122483  734452 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-90c6185d3a1c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:d7:d8:45:ed:62} reservation:<nil>}
	I1213 13:46:39.123444  734452 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b99c511b2851 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:f5:60:cf:cf:e0} reservation:<nil>}
	I1213 13:46:39.124137  734452 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8173e81c4a82 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:76:c5:9d:b0:f9} reservation:<nil>}
	I1213 13:46:39.125173  734452 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed8a30}
	I1213 13:46:39.125201  734452 network_create.go:124] attempt to create docker network newest-cni-362964 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 13:46:39.125260  734452 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-362964 newest-cni-362964
	I1213 13:46:39.179901  734452 network_create.go:108] docker network newest-cni-362964 192.168.76.0/24 created
	I1213 13:46:39.179928  734452 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-362964" container
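	The network.go lines above scan candidate private /24 subnets (192.168.49.0, .58, .67, then settling on 192.168.76.0, stepping the third octet by 9) and skip any already backing a docker bridge. A toy version of that scan follows; it is not minikube's code, the step of 9 and the starting subnet are taken from the log, and the upper bound is illustrative.

    package subnetpick

    import "fmt"

    // pickFreeSubnet steps the third octet by 9 starting at 192.168.49.0/24 and
    // returns the first /24 that is not already in use.
    func pickFreeSubnet(taken map[string]bool) (string, bool) {
    	for octet := 49; octet <= 247; octet += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !taken[cidr] {
    			return cidr, true
    		}
    	}
    	return "", false
    }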
	I1213 13:46:39.179979  734452 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 13:46:39.213973  734452 cli_runner.go:164] Run: docker volume create newest-cni-362964 --label name.minikube.sigs.k8s.io=newest-cni-362964 --label created_by.minikube.sigs.k8s.io=true
	I1213 13:46:39.235544  734452 oci.go:103] Successfully created a docker volume newest-cni-362964
	I1213 13:46:39.235642  734452 cli_runner.go:164] Run: docker run --rm --name newest-cni-362964-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-362964 --entrypoint /usr/bin/test -v newest-cni-362964:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 13:46:39.751588  734452 oci.go:107] Successfully prepared a docker volume newest-cni-362964
	I1213 13:46:39.751676  734452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:46:39.751688  734452 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 13:46:39.751766  734452 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-362964:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 13:46:40.721469  730912 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1213 13:46:40.727005  730912 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:46:40.727036  730912 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:46:41.221758  730912 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1213 13:46:41.227300  730912 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1213 13:46:41.228302  730912 api_server.go:141] control plane version: v1.34.2
	I1213 13:46:41.228325  730912 api_server.go:131] duration metric: took 1.007264269s to wait for apiserver health ...
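	The api_server.go lines above keep probing https://192.168.94.2:8444/healthz, tolerating 500s while the rbac and priority-class post-start hooks finish, and stop once a 200 "ok" comes back. A bare-bones sketch of such a poll follows; the URL comes from the log, and TLS verification is skipped here only to keep the example short.

    package healthwait

    import (
    	"context"
    	"crypto/tls"
    	"net/http"
    	"time"
    )

    // waitHealthz polls url until it returns HTTP 200 or ctx expires.
    func waitHealthz(ctx context.Context, url string) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for {
    		if resp, err := client.Get(url); err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz answered "ok"
    			}
    			// a 500 body lists the failing poststarthook checks, as in the log
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(500 * time.Millisecond):
    		}
    	}
    }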
	I1213 13:46:41.228334  730912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 13:46:41.231822  730912 system_pods.go:59] 8 kube-system pods found
	I1213 13:46:41.231857  730912 system_pods.go:61] "coredns-66bc5c9577-tzzmx" [980da903-c99d-4518-9ee3-7e5a96adec7e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:46:41.231869  730912 system_pods.go:61] "etcd-default-k8s-diff-port-038239" [4281e3fe-09b2-4f4b-b735-e81d8f92611d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 13:46:41.231876  730912 system_pods.go:61] "kindnet-c65rs" [70da74c6-b3f7-4c93-830f-cd2e08c1a82b] Running
	I1213 13:46:41.231882  730912 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038239" [61e90c83-4a74-41da-af00-64ad96e831b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 13:46:41.231891  730912 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038239" [327b2203-201b-4496-b88d-085894210077] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 13:46:41.231897  730912 system_pods.go:61] "kube-proxy-lzwfg" [706752fb-a589-4e6f-b710-228e3650dacd] Running
	I1213 13:46:41.231905  730912 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038239" [ae96dbde-d4ad-4db9-a9d4-dd56f9954d93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 13:46:41.231912  730912 system_pods.go:61] "storage-provisioner" [ee84dbb0-2764-427e-aa74-2827e9ce9620] Running
	I1213 13:46:41.231923  730912 system_pods.go:74] duration metric: took 3.580887ms to wait for pod list to return data ...
	I1213 13:46:41.231936  730912 default_sa.go:34] waiting for default service account to be created ...
	I1213 13:46:41.234505  730912 default_sa.go:45] found service account: "default"
	I1213 13:46:41.234528  730912 default_sa.go:55] duration metric: took 2.585513ms for default service account to be created ...
	I1213 13:46:41.234537  730912 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 13:46:41.237182  730912 system_pods.go:86] 8 kube-system pods found
	I1213 13:46:41.237209  730912 system_pods.go:89] "coredns-66bc5c9577-tzzmx" [980da903-c99d-4518-9ee3-7e5a96adec7e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:46:41.237220  730912 system_pods.go:89] "etcd-default-k8s-diff-port-038239" [4281e3fe-09b2-4f4b-b735-e81d8f92611d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 13:46:41.237227  730912 system_pods.go:89] "kindnet-c65rs" [70da74c6-b3f7-4c93-830f-cd2e08c1a82b] Running
	I1213 13:46:41.237236  730912 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-038239" [61e90c83-4a74-41da-af00-64ad96e831b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 13:46:41.237245  730912 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-038239" [327b2203-201b-4496-b88d-085894210077] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 13:46:41.237253  730912 system_pods.go:89] "kube-proxy-lzwfg" [706752fb-a589-4e6f-b710-228e3650dacd] Running
	I1213 13:46:41.237261  730912 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-038239" [ae96dbde-d4ad-4db9-a9d4-dd56f9954d93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 13:46:41.237271  730912 system_pods.go:89] "storage-provisioner" [ee84dbb0-2764-427e-aa74-2827e9ce9620] Running
	I1213 13:46:41.237279  730912 system_pods.go:126] duration metric: took 2.735704ms to wait for k8s-apps to be running ...
	I1213 13:46:41.237288  730912 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 13:46:41.237331  730912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:46:41.250597  730912 system_svc.go:56] duration metric: took 13.296933ms WaitForService to wait for kubelet
	I1213 13:46:41.250630  730912 kubeadm.go:587] duration metric: took 3.542081461s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:46:41.250655  730912 node_conditions.go:102] verifying NodePressure condition ...
	I1213 13:46:41.254078  730912 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 13:46:41.254103  730912 node_conditions.go:123] node cpu capacity is 8
	I1213 13:46:41.254126  730912 node_conditions.go:105] duration metric: took 3.462529ms to run NodePressure ...
	I1213 13:46:41.254141  730912 start.go:242] waiting for startup goroutines ...
	I1213 13:46:41.254155  730912 start.go:247] waiting for cluster config update ...
	I1213 13:46:41.254174  730912 start.go:256] writing updated cluster config ...
	I1213 13:46:41.254482  730912 ssh_runner.go:195] Run: rm -f paused
	I1213 13:46:41.258509  730912 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:46:41.262286  730912 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tzzmx" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 13:46:43.315769  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	W1213 13:46:42.051398  726383 pod_ready.go:104] pod "coredns-66bc5c9577-bl59n" is not "Ready", error: <nil>
	I1213 13:46:44.558674  726383 pod_ready.go:94] pod "coredns-66bc5c9577-bl59n" is "Ready"
	I1213 13:46:44.558713  726383 pod_ready.go:86] duration metric: took 32.012951382s for pod "coredns-66bc5c9577-bl59n" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.561144  726383 pod_ready.go:83] waiting for pod "etcd-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.565899  726383 pod_ready.go:94] pod "etcd-embed-certs-973953" is "Ready"
	I1213 13:46:44.565923  726383 pod_ready.go:86] duration metric: took 4.7423ms for pod "etcd-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.568261  726383 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.572565  726383 pod_ready.go:94] pod "kube-apiserver-embed-certs-973953" is "Ready"
	I1213 13:46:44.572592  726383 pod_ready.go:86] duration metric: took 4.304087ms for pod "kube-apiserver-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.575031  726383 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.750453  726383 pod_ready.go:94] pod "kube-controller-manager-embed-certs-973953" is "Ready"
	I1213 13:46:44.750489  726383 pod_ready.go:86] duration metric: took 175.430643ms for pod "kube-controller-manager-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.951317  726383 pod_ready.go:83] waiting for pod "kube-proxy-jqcpv" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:45.350477  726383 pod_ready.go:94] pod "kube-proxy-jqcpv" is "Ready"
	I1213 13:46:45.350507  726383 pod_ready.go:86] duration metric: took 399.159038ms for pod "kube-proxy-jqcpv" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:45.550818  726383 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:45.950357  726383 pod_ready.go:94] pod "kube-scheduler-embed-certs-973953" is "Ready"
	I1213 13:46:45.950385  726383 pod_ready.go:86] duration metric: took 399.541821ms for pod "kube-scheduler-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:45.950396  726383 pod_ready.go:40] duration metric: took 33.408030209s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:46:46.003877  726383 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 13:46:46.006266  726383 out.go:179] * Done! kubectl is now configured to use "embed-certs-973953" cluster and "default" namespace by default
	I1213 13:46:43.827925  734452 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-362964:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.0760512s)
	I1213 13:46:43.827966  734452 kic.go:203] duration metric: took 4.076273522s to extract preloaded images to volume ...
	W1213 13:46:43.828063  734452 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1213 13:46:43.828111  734452 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1213 13:46:43.828160  734452 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 13:46:43.885693  734452 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-362964 --name newest-cni-362964 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-362964 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-362964 --network newest-cni-362964 --ip 192.168.76.2 --volume newest-cni-362964:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
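For reference, each --publish=127.0.0.1:: flag in the docker run above binds the container port (22, 2376, 5000, 8443, 32443) to an ephemeral host port on the loopback interface; that is why the SSH steps further down dial 127.0.0.1:33515 rather than port 22. The mapping Docker picked can be read back at any time, for example:

	docker port newest-cni-362964 22/tcp
	docker port newest-cni-362964 8443/tcp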
	I1213 13:46:44.183753  734452 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Running}}
	I1213 13:46:44.203369  734452 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:46:44.223422  734452 cli_runner.go:164] Run: docker exec newest-cni-362964 stat /var/lib/dpkg/alternatives/iptables
	I1213 13:46:44.277034  734452 oci.go:144] the created container "newest-cni-362964" has a running status.
	I1213 13:46:44.277064  734452 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa...
	I1213 13:46:44.344914  734452 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 13:46:44.377198  734452 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:46:44.402053  734452 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 13:46:44.402083  734452 kic_runner.go:114] Args: [docker exec --privileged newest-cni-362964 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 13:46:44.478040  734452 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:46:44.506931  734452 machine.go:94] provisionDockerMachine start ...
	I1213 13:46:44.507418  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:44.537001  734452 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:44.537395  734452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1213 13:46:44.537427  734452 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:46:44.538118  734452 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48464->127.0.0.1:33515: read: connection reset by peer
	I1213 13:46:47.689037  734452 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-362964
	
	I1213 13:46:47.689072  734452 ubuntu.go:182] provisioning hostname "newest-cni-362964"
	I1213 13:46:47.689140  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:47.712543  734452 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:47.713000  734452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1213 13:46:47.713025  734452 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-362964 && echo "newest-cni-362964" | sudo tee /etc/hostname
	I1213 13:46:47.873217  734452 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-362964
	
	I1213 13:46:47.873318  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:47.896725  734452 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:47.897081  734452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1213 13:46:47.897130  734452 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-362964' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-362964/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-362964' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:46:48.044203  734452 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 13:46:48.044232  734452 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-390571/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-390571/.minikube}
	I1213 13:46:48.044289  734452 ubuntu.go:190] setting up certificates
	I1213 13:46:48.044304  734452 provision.go:84] configureAuth start
	I1213 13:46:48.044368  734452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:46:48.068662  734452 provision.go:143] copyHostCerts
	I1213 13:46:48.068728  734452 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem, removing ...
	I1213 13:46:48.068739  734452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem
	I1213 13:46:48.068879  734452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem (1123 bytes)
	I1213 13:46:48.069004  734452 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem, removing ...
	I1213 13:46:48.069048  734452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem
	I1213 13:46:48.069113  734452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem (1679 bytes)
	I1213 13:46:48.069294  734452 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem, removing ...
	I1213 13:46:48.069312  734452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem
	I1213 13:46:48.069355  734452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem (1078 bytes)
	I1213 13:46:48.069462  734452 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem org=jenkins.newest-cni-362964 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-362964]
	I1213 13:46:48.220174  734452 provision.go:177] copyRemoteCerts
	I1213 13:46:48.220240  734452 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:46:48.220284  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:48.242055  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:46:48.348835  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 13:46:48.372845  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:46:48.394838  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 13:46:48.416450  734452 provision.go:87] duration metric: took 372.119155ms to configureAuth
	I1213 13:46:48.416488  734452 ubuntu.go:206] setting minikube options for container-runtime
	I1213 13:46:48.416718  734452 config.go:182] Loaded profile config "newest-cni-362964": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:46:48.416935  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:48.438340  734452 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:48.438572  734452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1213 13:46:48.438593  734452 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:46:48.772615  734452 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:46:48.772642  734452 machine.go:97] duration metric: took 4.265315999s to provisionDockerMachine
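Provisioning finishes with the /etc/sysconfig/crio.minikube write a few lines above, which records CRIO_MINIKUBE_OPTIONS (here an --insecure-registry entry for the 10.96.0.0/12 service CIDR) and is expected to be picked up by the crio unit on the restart that follows it. A quick way to confirm the file landed and the runtime came back, using the profile name from this run:

	minikube -p newest-cni-362964 ssh -- cat /etc/sysconfig/crio.minikube
	minikube -p newest-cni-362964 ssh -- sudo systemctl is-active crio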
	I1213 13:46:48.772654  734452 client.go:176] duration metric: took 9.721476668s to LocalClient.Create
	I1213 13:46:48.772675  734452 start.go:167] duration metric: took 9.721549598s to libmachine.API.Create "newest-cni-362964"
	I1213 13:46:48.772685  734452 start.go:293] postStartSetup for "newest-cni-362964" (driver="docker")
	I1213 13:46:48.772700  734452 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:46:48.772766  734452 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:46:48.772846  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:48.796130  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	W1213 13:46:45.768717  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	W1213 13:46:48.269155  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	I1213 13:46:48.906093  734452 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:46:48.910767  734452 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 13:46:48.910823  734452 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 13:46:48.910839  734452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/addons for local assets ...
	I1213 13:46:48.910910  734452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/files for local assets ...
	I1213 13:46:48.911037  734452 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem -> 3941302.pem in /etc/ssl/certs
	I1213 13:46:48.911209  734452 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 13:46:48.921911  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:46:48.947921  734452 start.go:296] duration metric: took 175.219125ms for postStartSetup
	I1213 13:46:48.948314  734452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:46:48.972402  734452 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/config.json ...
	I1213 13:46:48.972688  734452 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:46:48.972732  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:48.995624  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:46:49.100377  734452 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 13:46:49.106414  734452 start.go:128] duration metric: took 10.061800408s to createHost
	I1213 13:46:49.106444  734452 start.go:83] releasing machines lock for "newest-cni-362964", held for 10.062163513s
	I1213 13:46:49.106521  734452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:46:49.131359  734452 ssh_runner.go:195] Run: cat /version.json
	I1213 13:46:49.131430  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:49.131434  734452 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:46:49.131534  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:49.155684  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:46:49.156118  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:46:49.345845  734452 ssh_runner.go:195] Run: systemctl --version
	I1213 13:46:49.354872  734452 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:46:49.402808  734452 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 13:46:49.408988  734452 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:46:49.409066  734452 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:46:49.440997  734452 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 13:46:49.441025  734452 start.go:496] detecting cgroup driver to use...
	I1213 13:46:49.441060  734452 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 13:46:49.441115  734452 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:46:49.462316  734452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:46:49.477713  734452 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:46:49.477795  734452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:46:49.501648  734452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:46:49.526524  734452 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:46:49.629504  734452 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:46:49.728940  734452 docker.go:234] disabling docker service ...
	I1213 13:46:49.729008  734452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:46:49.751594  734452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:46:49.766407  734452 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:46:49.855523  734452 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:46:49.940562  734452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:46:49.953965  734452 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:46:49.968209  734452 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 13:46:49.968288  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:49.979551  734452 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 13:46:49.979626  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:49.988154  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:49.997026  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:50.005337  734452 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:46:50.013019  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:50.021641  734452 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:50.035024  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:50.043264  734452 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:46:50.050409  734452 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:46:50.057213  734452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:46:50.144700  734452 ssh_runner.go:195] Run: sudo systemctl restart crio
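Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pointing at the registry.k8s.io/pause:3.10.1 pause image, using the systemd cgroup manager with a pod-scoped conmon cgroup, and allowing unprivileged low ports via default_sysctls. A rough check of just those keys on the node (the comments show the values written by the commands above, not the full file):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",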
	I1213 13:46:51.023735  734452 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:46:51.023835  734452 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:46:51.028520  734452 start.go:564] Will wait 60s for crictl version
	I1213 13:46:51.028585  734452 ssh_runner.go:195] Run: which crictl
	I1213 13:46:51.032526  734452 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 13:46:51.058397  734452 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 13:46:51.058490  734452 ssh_runner.go:195] Run: crio --version
	I1213 13:46:51.086747  734452 ssh_runner.go:195] Run: crio --version
	I1213 13:46:51.117725  734452 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 13:46:51.118756  734452 cli_runner.go:164] Run: docker network inspect newest-cni-362964 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:46:51.138994  734452 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 13:46:51.143167  734452 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:46:51.155706  734452 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 13:46:51.156802  734452 kubeadm.go:884] updating cluster {Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:46:51.156953  734452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:46:51.157039  734452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:46:51.198200  734452 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:46:51.198221  734452 crio.go:433] Images already preloaded, skipping extraction
	I1213 13:46:51.198267  734452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:46:51.225683  734452 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:46:51.225709  734452 cache_images.go:86] Images are preloaded, skipping loading
	I1213 13:46:51.225719  734452 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 13:46:51.225843  734452 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-362964 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
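The [Unit]/[Service] snippet above is written a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes) and overrides ExecStart with the flags shown. If the kubelet fails to start later, the effective unit and its recent output can be inspected on the node with standard systemd tooling:

	sudo systemctl cat kubelet
	sudo journalctl -u kubelet --no-pager | tail -n 50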
	I1213 13:46:51.225940  734452 ssh_runner.go:195] Run: crio config
	I1213 13:46:51.273702  734452 cni.go:84] Creating CNI manager for ""
	I1213 13:46:51.273722  734452 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:46:51.273741  734452 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 13:46:51.273768  734452 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-362964 NodeName:newest-cni-362964 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:46:51.273951  734452 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-362964"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 13:46:51.274024  734452 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 13:46:51.282302  734452 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:46:51.282376  734452 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:46:51.290422  734452 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 13:46:51.303253  734452 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 13:46:51.318075  734452 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
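The 2218-byte file written here is the kubeadm config printed above, staged as kubeadm.yaml.new and later copied to /var/tmp/minikube/kubeadm.yaml before init. If a config like this ever needs checking by hand, recent kubeadm releases can lint it without touching the cluster (a hedged example using the binary path from this run):

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml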
	I1213 13:46:51.331214  734452 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 13:46:51.334976  734452 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:46:51.345829  734452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:46:51.437080  734452 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:46:51.461201  734452 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964 for IP: 192.168.76.2
	I1213 13:46:51.461228  734452 certs.go:195] generating shared ca certs ...
	I1213 13:46:51.461258  734452 certs.go:227] acquiring lock for ca certs: {Name:mkb6963f3134ffd486c672ddb3a967e56122d5d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.461456  734452 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key
	I1213 13:46:51.461517  734452 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key
	I1213 13:46:51.461535  734452 certs.go:257] generating profile certs ...
	I1213 13:46:51.461611  734452 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.key
	I1213 13:46:51.461644  734452 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.crt with IP's: []
	I1213 13:46:51.675129  734452 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.crt ...
	I1213 13:46:51.675163  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.crt: {Name:mkfc2919111fa26d81b7191d3873ecc598936940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.675356  734452 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.key ...
	I1213 13:46:51.675368  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.key: {Name:mkcca4e2f19072f042ecc8cce95f891ff7bba521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.675455  734452 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb
	I1213 13:46:51.675473  734452 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt.a735fadb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 13:46:51.732537  734452 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt.a735fadb ...
	I1213 13:46:51.732571  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt.a735fadb: {Name:mka68b1fc7336251712aa83c57233f6aaa26b56e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.732752  734452 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb ...
	I1213 13:46:51.732766  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb: {Name:mk7b2188d2ac3de30be4a0ecf05771755b89586c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.732898  734452 certs.go:382] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt.a735fadb -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt
	I1213 13:46:51.733002  734452 certs.go:386] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key
	I1213 13:46:51.733072  734452 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key
	I1213 13:46:51.733091  734452 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt with IP's: []
	I1213 13:46:51.768844  734452 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt ...
	I1213 13:46:51.768876  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt: {Name:mk54ca537df717e699f15967f0763bc1a365ba7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.769051  734452 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key ...
	I1213 13:46:51.769066  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key: {Name:mkc6731d5f061dd55c086b1529645fdd7e056a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.769254  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem (1338 bytes)
	W1213 13:46:51.769294  734452 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130_empty.pem, impossibly tiny 0 bytes
	I1213 13:46:51.769306  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:46:51.769336  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:46:51.769363  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:46:51.769392  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem (1679 bytes)
	I1213 13:46:51.769438  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:46:51.770096  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:46:51.789179  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 13:46:51.807957  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:46:51.829246  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 13:46:51.849816  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 13:46:51.867382  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 13:46:51.884431  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:46:51.901499  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 13:46:51.918590  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem --> /usr/share/ca-certificates/394130.pem (1338 bytes)
	I1213 13:46:51.938587  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /usr/share/ca-certificates/3941302.pem (1708 bytes)
	I1213 13:46:51.956885  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:46:51.976711  734452 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
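With the certificates copied onto the node, the apiserver serving cert generated above should carry the SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2. If an apiserver TLS problem is suspected later, that can be double-checked directly (OpenSSL 1.1.1 or newer for the -ext option):

	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -subject -ext subjectAltName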
	I1213 13:46:51.990451  734452 ssh_runner.go:195] Run: openssl version
	I1213 13:46:51.996876  734452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/394130.pem
	I1213 13:46:52.004771  734452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/394130.pem /etc/ssl/certs/394130.pem
	I1213 13:46:52.013327  734452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/394130.pem
	I1213 13:46:52.017188  734452 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/394130.pem
	I1213 13:46:52.017246  734452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/394130.pem
	I1213 13:46:52.052182  734452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 13:46:52.060156  734452 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/394130.pem /etc/ssl/certs/51391683.0
	I1213 13:46:52.067555  734452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3941302.pem
	I1213 13:46:52.074980  734452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3941302.pem /etc/ssl/certs/3941302.pem
	I1213 13:46:52.083293  734452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3941302.pem
	I1213 13:46:52.087008  734452 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/3941302.pem
	I1213 13:46:52.087060  734452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3941302.pem
	I1213 13:46:52.121292  734452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 13:46:52.129202  734452 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3941302.pem /etc/ssl/certs/3ec20f2e.0
	I1213 13:46:52.136878  734452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:52.144894  734452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:46:52.152936  734452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:52.156906  734452 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:52.156974  734452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:52.192626  734452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 13:46:52.200484  734452 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
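The openssl -hash / ln -fs pairs above implement the standard OpenSSL CA directory layout: each trusted certificate in /etc/ssl/certs is reachable through a symlink named after its subject hash with a .0 suffix (b5213941.0 for minikubeCA.pem here). The two steps per certificate boil down to:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"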
	I1213 13:46:52.207749  734452 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:46:52.211283  734452 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 13:46:52.211338  734452 kubeadm.go:401] StartCluster: {Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:46:52.211418  734452 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:46:52.211486  734452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:46:52.238989  734452 cri.go:89] found id: ""
	I1213 13:46:52.239071  734452 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:46:52.248678  734452 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 13:46:52.257209  734452 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 13:46:52.257267  734452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 13:46:52.265205  734452 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 13:46:52.265226  734452 kubeadm.go:158] found existing configuration files:
	
	I1213 13:46:52.265280  734452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 13:46:52.273379  734452 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 13:46:52.273433  734452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 13:46:52.280768  734452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 13:46:52.288560  734452 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 13:46:52.288610  734452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 13:46:52.296093  734452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 13:46:52.303964  734452 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 13:46:52.304023  734452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 13:46:52.311559  734452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 13:46:52.320197  734452 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 13:46:52.320257  734452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 13:46:52.334065  734452 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 13:46:52.371455  734452 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 13:46:52.371571  734452 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 13:46:52.442098  734452 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 13:46:52.442200  734452 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1213 13:46:52.442255  734452 kubeadm.go:319] OS: Linux
	I1213 13:46:52.442323  734452 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 13:46:52.442390  734452 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 13:46:52.442455  734452 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 13:46:52.442512  734452 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 13:46:52.442578  734452 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 13:46:52.442697  734452 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 13:46:52.442826  734452 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 13:46:52.442969  734452 kubeadm.go:319] CGROUPS_IO: enabled
	I1213 13:46:52.508064  734452 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 13:46:52.508249  734452 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 13:46:52.508406  734452 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 13:46:52.516288  734452 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 13:46:52.519224  734452 out.go:252]   - Generating certificates and keys ...
	I1213 13:46:52.519355  734452 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 13:46:52.519493  734452 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 13:46:52.532097  734452 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 13:46:52.698464  734452 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 13:46:52.742997  734452 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 13:46:52.834618  734452 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 13:46:52.947440  734452 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 13:46:52.947607  734452 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-362964] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 13:46:53.014857  734452 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 13:46:53.015046  734452 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-362964] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 13:46:53.141370  734452 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 13:46:53.236321  734452 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 13:46:53.329100  734452 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 13:46:53.329196  734452 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 13:46:53.418157  734452 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 13:46:53.508241  734452 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 13:46:53.569616  734452 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 13:46:53.618621  734452 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 13:46:53.646993  734452 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 13:46:53.647697  734452 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 13:46:53.651749  734452 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 13:46:53.653114  734452 out.go:252]   - Booting up control plane ...
	I1213 13:46:53.653242  734452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 13:46:53.653571  734452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 13:46:53.654959  734452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 13:46:53.677067  734452 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 13:46:53.677242  734452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 13:46:53.684167  734452 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 13:46:53.684396  734452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 13:46:53.684462  734452 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 13:46:53.802893  734452 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 13:46:53.803078  734452 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1213 13:46:50.770133  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	W1213 13:46:53.268827  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	I1213 13:46:54.304306  734452 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.553214ms
	I1213 13:46:54.307257  734452 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 13:46:54.307404  734452 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1213 13:46:54.307545  734452 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 13:46:54.307659  734452 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 13:46:54.813847  734452 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 506.428042ms
	I1213 13:46:56.298840  734452 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.990746279s
	I1213 13:46:57.809364  734452 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502006023s
	I1213 13:46:57.828589  734452 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 13:46:57.839273  734452 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 13:46:57.849857  734452 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 13:46:57.850169  734452 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-362964 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 13:46:57.859609  734452 kubeadm.go:319] [bootstrap-token] Using token: wuq8cx.vg81wzcp5d3gm8z3
	I1213 13:46:57.861437  734452 out.go:252]   - Configuring RBAC rules ...
	I1213 13:46:57.861592  734452 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 13:46:57.864478  734452 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 13:46:57.870588  734452 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 13:46:57.873470  734452 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 13:46:57.877170  734452 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 13:46:57.880076  734452 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 13:46:58.219051  734452 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 13:46:58.659077  734452 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 13:46:59.216148  734452 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 13:46:59.217382  734452 kubeadm.go:319] 
	I1213 13:46:59.217471  734452 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 13:46:59.217510  734452 kubeadm.go:319] 
	I1213 13:46:59.217666  734452 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 13:46:59.217685  734452 kubeadm.go:319] 
	I1213 13:46:59.217725  734452 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 13:46:59.217827  734452 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 13:46:59.217929  734452 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 13:46:59.217945  734452 kubeadm.go:319] 
	I1213 13:46:59.218020  734452 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 13:46:59.218034  734452 kubeadm.go:319] 
	I1213 13:46:59.218103  734452 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 13:46:59.218116  734452 kubeadm.go:319] 
	I1213 13:46:59.218206  734452 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 13:46:59.218322  734452 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 13:46:59.218423  734452 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 13:46:59.218435  734452 kubeadm.go:319] 
	I1213 13:46:59.218551  734452 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 13:46:59.218673  734452 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 13:46:59.218689  734452 kubeadm.go:319] 
	I1213 13:46:59.218845  734452 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token wuq8cx.vg81wzcp5d3gm8z3 \
	I1213 13:46:59.218984  734452 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ef8a7d1add12598ce2ec2dab13c01ff0d42437969bb9f662810a30bd819ab8f9 \
	I1213 13:46:59.219028  734452 kubeadm.go:319] 	--control-plane 
	I1213 13:46:59.219044  734452 kubeadm.go:319] 
	I1213 13:46:59.219172  734452 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 13:46:59.219183  734452 kubeadm.go:319] 
	I1213 13:46:59.219331  734452 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token wuq8cx.vg81wzcp5d3gm8z3 \
	I1213 13:46:59.219511  734452 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ef8a7d1add12598ce2ec2dab13c01ff0d42437969bb9f662810a30bd819ab8f9 
	I1213 13:46:59.221855  734452 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1213 13:46:59.222014  734452 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 13:46:59.222057  734452 cni.go:84] Creating CNI manager for ""
	I1213 13:46:59.222071  734452 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:46:59.232701  734452 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1213 13:46:55.768853  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	W1213 13:46:58.272372  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 13 13:46:22 embed-certs-973953 crio[566]: time="2025-12-13T13:46:22.353109422Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 13:46:22 embed-certs-973953 crio[566]: time="2025-12-13T13:46:22.619370228Z" level=info msg="Removing container: 1b82dfc5a703e53976a3918ab50dc1d000d9437ab7b427384f4df9aab69e1690" id=a2fe3143-ebe0-4347-a441-91c82c4810fd name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 13:46:22 embed-certs-973953 crio[566]: time="2025-12-13T13:46:22.630863527Z" level=info msg="Removed container 1b82dfc5a703e53976a3918ab50dc1d000d9437ab7b427384f4df9aab69e1690: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh/dashboard-metrics-scraper" id=a2fe3143-ebe0-4347-a441-91c82c4810fd name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 13:46:40 embed-certs-973953 crio[566]: time="2025-12-13T13:46:40.551416746Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d8f01804-e321-436b-9817-8e812cbddb50 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:40 embed-certs-973953 crio[566]: time="2025-12-13T13:46:40.552341885Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cecf08ee-1c3c-4162-b1a8-28e21e43dce4 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:40 embed-certs-973953 crio[566]: time="2025-12-13T13:46:40.553386246Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh/dashboard-metrics-scraper" id=9426bf20-07aa-4c70-9a60-85464d455823 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:40 embed-certs-973953 crio[566]: time="2025-12-13T13:46:40.553531798Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:40 embed-certs-973953 crio[566]: time="2025-12-13T13:46:40.55933211Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:40 embed-certs-973953 crio[566]: time="2025-12-13T13:46:40.559883616Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:40 embed-certs-973953 crio[566]: time="2025-12-13T13:46:40.588662174Z" level=info msg="Created container 88fa874dcae8ecbde6c678ae8ef9b5c71b4742998a3c98d303aeed286a42e98c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh/dashboard-metrics-scraper" id=9426bf20-07aa-4c70-9a60-85464d455823 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:40 embed-certs-973953 crio[566]: time="2025-12-13T13:46:40.589538732Z" level=info msg="Starting container: 88fa874dcae8ecbde6c678ae8ef9b5c71b4742998a3c98d303aeed286a42e98c" id=187c2d48-42c3-4fde-9518-be570d480a8b name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:46:40 embed-certs-973953 crio[566]: time="2025-12-13T13:46:40.592302327Z" level=info msg="Started container" PID=1777 containerID=88fa874dcae8ecbde6c678ae8ef9b5c71b4742998a3c98d303aeed286a42e98c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh/dashboard-metrics-scraper id=187c2d48-42c3-4fde-9518-be570d480a8b name=/runtime.v1.RuntimeService/StartContainer sandboxID=89aa4101d58c72d9ed51b2f1cc864f2a088df528dcacd14be9ee59fa3a1aa29e
	Dec 13 13:46:40 embed-certs-973953 crio[566]: time="2025-12-13T13:46:40.667239682Z" level=info msg="Removing container: 6d493ab7a549d2081851a1702e3fb9cb2ec842dba70edd2880a8abaa9d0c2fff" id=0433495a-2dfa-49e1-ab2c-4211c3e40c7d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 13:46:40 embed-certs-973953 crio[566]: time="2025-12-13T13:46:40.680942781Z" level=info msg="Removed container 6d493ab7a549d2081851a1702e3fb9cb2ec842dba70edd2880a8abaa9d0c2fff: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh/dashboard-metrics-scraper" id=0433495a-2dfa-49e1-ab2c-4211c3e40c7d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 13:46:42 embed-certs-973953 crio[566]: time="2025-12-13T13:46:42.675121793Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2e52d7d2-efde-4eda-92c8-0e6c8ba839a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:42 embed-certs-973953 crio[566]: time="2025-12-13T13:46:42.759515941Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=babc7d15-1b16-45bd-be3a-3dad4c805663 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:42 embed-certs-973953 crio[566]: time="2025-12-13T13:46:42.760860716Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=09770379-3dab-4ddd-bab9-7c8618e5fece name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:42 embed-certs-973953 crio[566]: time="2025-12-13T13:46:42.761014803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:42 embed-certs-973953 crio[566]: time="2025-12-13T13:46:42.882175484Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:42 embed-certs-973953 crio[566]: time="2025-12-13T13:46:42.88236839Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/106bfe52c3a294748f7f195cb23ce6639daeb58e637d52c70a8bb6ef9c6890dc/merged/etc/passwd: no such file or directory"
	Dec 13 13:46:42 embed-certs-973953 crio[566]: time="2025-12-13T13:46:42.882404788Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/106bfe52c3a294748f7f195cb23ce6639daeb58e637d52c70a8bb6ef9c6890dc/merged/etc/group: no such file or directory"
	Dec 13 13:46:42 embed-certs-973953 crio[566]: time="2025-12-13T13:46:42.882675131Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:43 embed-certs-973953 crio[566]: time="2025-12-13T13:46:43.091283438Z" level=info msg="Created container efd7617437d9c5becbcfe2a0765d7577e574de74c964f48ef5f0c61f98e15c5d: kube-system/storage-provisioner/storage-provisioner" id=09770379-3dab-4ddd-bab9-7c8618e5fece name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:43 embed-certs-973953 crio[566]: time="2025-12-13T13:46:43.092011446Z" level=info msg="Starting container: efd7617437d9c5becbcfe2a0765d7577e574de74c964f48ef5f0c61f98e15c5d" id=e91c7fb1-cf73-48cd-9ac5-c388d39e25aa name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:46:43 embed-certs-973953 crio[566]: time="2025-12-13T13:46:43.094012093Z" level=info msg="Started container" PID=1793 containerID=efd7617437d9c5becbcfe2a0765d7577e574de74c964f48ef5f0c61f98e15c5d description=kube-system/storage-provisioner/storage-provisioner id=e91c7fb1-cf73-48cd-9ac5-c388d39e25aa name=/runtime.v1.RuntimeService/StartContainer sandboxID=fcd212653d7f1e61b2f20aca2226d5b2653f8a8685e61b85fbd95da675a2ccf3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	efd7617437d9c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   fcd212653d7f1       storage-provisioner                          kube-system
	88fa874dcae8e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   89aa4101d58c7       dashboard-metrics-scraper-6ffb444bf9-rdwkh   kubernetes-dashboard
	829175c211a73       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   d733563635a79       kubernetes-dashboard-855c9754f9-9zb5p        kubernetes-dashboard
	ee4a5f8af0e37       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   b3ab42e749eaa       busybox                                      default
	7c492a13369dc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   b5ce6d268bd4b       coredns-66bc5c9577-bl59n                     kube-system
	a3bd12ac5959f       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           49 seconds ago      Running             kube-proxy                  0                   8a62fc7018533       kube-proxy-jqcpv                             kube-system
	6c555c3d5d969       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   19a849590ec40       kindnet-bw5d4                                kube-system
	25179d237bb92       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   fcd212653d7f1       storage-provisioner                          kube-system
	ca59722508ee8       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           52 seconds ago      Running             kube-apiserver              0                   3fff06e172c61       kube-apiserver-embed-certs-973953            kube-system
	63a2ba4a5a1d9       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           52 seconds ago      Running             etcd                        0                   ffee6fdd91e44       etcd-embed-certs-973953                      kube-system
	447b95afd76fc       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           52 seconds ago      Running             kube-scheduler              0                   f8734f1314361       kube-scheduler-embed-certs-973953            kube-system
	628ec34c6d25d       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           52 seconds ago      Running             kube-controller-manager     0                   aa08751bd0ac2       kube-controller-manager-embed-certs-973953   kube-system
	
	
	==> coredns [7c492a13369dcfd1ee3f016e954fbecf54508fa7ba80fcd6015ec64cf928a302] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42517 - 33976 "HINFO IN 9206848693974487165.7963474350790246474. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.175046976s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-973953
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-973953
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=embed-certs-973953
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_45_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:45:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-973953
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:46:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:46:41 +0000   Sat, 13 Dec 2025 13:45:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:46:41 +0000   Sat, 13 Dec 2025 13:45:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:46:41 +0000   Sat, 13 Dec 2025 13:45:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 13:46:41 +0000   Sat, 13 Dec 2025 13:45:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-973953
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                03ac64dc-35d6-4a73-b891-f77762e89392
	  Boot ID:                    3a031c38-2de5-4abf-9191-ca3cf8c37af1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-bl59n                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     102s
	  kube-system                 etcd-embed-certs-973953                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         108s
	  kube-system                 kindnet-bw5d4                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-embed-certs-973953             250m (3%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-embed-certs-973953    200m (2%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-jqcpv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-embed-certs-973953             100m (1%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-rdwkh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9zb5p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 101s                 kube-proxy       
	  Normal  Starting                 49s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  113s (x8 over 113s)  kubelet          Node embed-certs-973953 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x8 over 113s)  kubelet          Node embed-certs-973953 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x8 over 113s)  kubelet          Node embed-certs-973953 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     108s                 kubelet          Node embed-certs-973953 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  108s                 kubelet          Node embed-certs-973953 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s                 kubelet          Node embed-certs-973953 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 108s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s                 node-controller  Node embed-certs-973953 event: Registered Node embed-certs-973953 in Controller
	  Normal  NodeReady                92s                  kubelet          Node embed-certs-973953 status is now: NodeReady
	  Normal  Starting                 53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)    kubelet          Node embed-certs-973953 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)    kubelet          Node embed-certs-973953 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 53s)    kubelet          Node embed-certs-973953 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                  node-controller  Node embed-certs-973953 event: Registered Node embed-certs-973953 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 d4 5a 35 c7 c3 08 06
	[  +0.021086] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +19.681588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 0c 97 18 9b e3 08 06
	[  +0.000314] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 04 61 d2 c8 ed 08 06
	[Dec13 13:44] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[  +7.252347] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 ce fd 58 59 0f 08 06
	[  +0.000117] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	[  +1.567410] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 59 b8 80 29 4a 08 06
	[  +0.000370] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +13.814205] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 cb 6b 87 5d af 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[Dec13 13:45] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8e 49 cc d7 b3 9c 08 06
	[  +0.000851] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	
	
	==> etcd [63a2ba4a5a1d996ff60a23b991b5a0cfa5dc9703b1f26e1efb01ad5545a6e669] <==
	{"level":"warn","ts":"2025-12-13T13:46:10.343800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.350165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.358002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.367581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.375429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.383173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.390622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.397313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.405340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.412667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.419656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.426657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.434219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.441923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.448926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.455263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.462215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.468764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.476712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.483454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.497764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.504381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.511334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.561682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41340","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T13:46:42.879604Z","caller":"traceutil/trace.go:172","msg":"trace[689963425] transaction","detail":"{read_only:false; response_revision:658; number_of_response:1; }","duration":"197.848307ms","start":"2025-12-13T13:46:42.681736Z","end":"2025-12-13T13:46:42.879584Z","steps":["trace[689963425] 'process raft request'  (duration: 196.921937ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:47:01 up  2:29,  0 user,  load average: 5.49, 4.32, 2.75
	Linux embed-certs-973953 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6c555c3d5d969e912b7a13fc6ea032d9b5037a541f10e177ed9f435d13f5bf08] <==
	I1213 13:46:12.129631       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 13:46:12.129891       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1213 13:46:12.130082       1 main.go:148] setting mtu 1500 for CNI 
	I1213 13:46:12.130106       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 13:46:12.130130       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T13:46:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 13:46:12.331876       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 13:46:12.331913       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 13:46:12.331930       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 13:46:12.332229       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 13:46:12.698401       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 13:46:12.698443       1 metrics.go:72] Registering metrics
	I1213 13:46:12.698522       1 controller.go:711] "Syncing nftables rules"
	I1213 13:46:22.331595       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 13:46:22.331689       1 main.go:301] handling current node
	I1213 13:46:32.338894       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 13:46:32.338920       1 main.go:301] handling current node
	I1213 13:46:42.331904       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 13:46:42.331936       1 main.go:301] handling current node
	I1213 13:46:52.334248       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 13:46:52.334320       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ca59722508ee8428d337934b1ea258c96ebcf5e6b597926df8e7c55eb6a97674] <==
	I1213 13:46:11.029940       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 13:46:11.029947       1 cache.go:39] Caches are synced for autoregister controller
	I1213 13:46:11.031011       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 13:46:11.031278       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 13:46:11.031476       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 13:46:11.031911       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 13:46:11.031989       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1213 13:46:11.032069       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 13:46:11.032137       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1213 13:46:11.031522       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 13:46:11.039567       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 13:46:11.048481       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 13:46:11.058386       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:46:11.324375       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 13:46:11.352065       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 13:46:11.370069       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 13:46:11.376117       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 13:46:11.382753       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 13:46:11.416104       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.37.235"}
	I1213 13:46:11.424989       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.230.164"}
	I1213 13:46:11.934095       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 13:46:14.536883       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:46:14.788601       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 13:46:14.788686       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 13:46:14.938334       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [628ec34c6d25dfe03110c51ea75cc04af49fd848dda5cc30d4f2618ba82a847e] <==
	I1213 13:46:14.368041       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 13:46:14.369225       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 13:46:14.371567       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1213 13:46:14.384062       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 13:46:14.384089       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 13:46:14.384114       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 13:46:14.384123       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 13:46:14.384125       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 13:46:14.384147       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 13:46:14.384287       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 13:46:14.384362       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1213 13:46:14.384458       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 13:46:14.384387       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 13:46:14.384467       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 13:46:14.384586       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1213 13:46:14.384750       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1213 13:46:14.384912       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-973953"
	I1213 13:46:14.384965       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1213 13:46:14.387750       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1213 13:46:14.387821       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1213 13:46:14.388154       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1213 13:46:14.389027       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1213 13:46:14.389075       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 13:46:14.391931       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1213 13:46:14.411221       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [a3bd12ac5959fa76ebe71bcd6e4bce6459412f36c9ca3212eaeb9f821e6a2c7e] <==
	I1213 13:46:11.956577       1 server_linux.go:53] "Using iptables proxy"
	I1213 13:46:12.029063       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 13:46:12.129920       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 13:46:12.129960       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1213 13:46:12.130102       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:46:12.151038       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:46:12.151099       1 server_linux.go:132] "Using iptables Proxier"
	I1213 13:46:12.156148       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:46:12.156514       1 server.go:527] "Version info" version="v1.34.2"
	I1213 13:46:12.156546       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:46:12.157822       1 config.go:200] "Starting service config controller"
	I1213 13:46:12.157845       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:46:12.157878       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:46:12.157885       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:46:12.157877       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:46:12.157900       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:46:12.158069       1 config.go:309] "Starting node config controller"
	I1213 13:46:12.158085       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:46:12.158093       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:46:12.258030       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 13:46:12.258095       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 13:46:12.258444       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [447b95afd76fcddb599b0f25dc7d2ae95263bb9a7ac29ae570889adee6a816b5] <==
	I1213 13:46:09.765852       1 serving.go:386] Generated self-signed cert in-memory
	W1213 13:46:10.963158       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 13:46:10.963207       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 13:46:10.963220       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 13:46:10.963230       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 13:46:10.989882       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 13:46:10.989914       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:46:10.992936       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 13:46:10.992986       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 13:46:10.992997       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 13:46:10.993069       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 13:46:11.094179       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 13:46:14 embed-certs-973953 kubelet[730]: I1213 13:46:14.975556     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4nbd\" (UniqueName: \"kubernetes.io/projected/d2365a41-1a6e-44b7-9890-47de6820efdf-kube-api-access-w4nbd\") pod \"dashboard-metrics-scraper-6ffb444bf9-rdwkh\" (UID: \"d2365a41-1a6e-44b7-9890-47de6820efdf\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh"
	Dec 13 13:46:14 embed-certs-973953 kubelet[730]: I1213 13:46:14.975631     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4c8daaac-7546-4f7d-a09c-a667c2a384b7-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-9zb5p\" (UID: \"4c8daaac-7546-4f7d-a09c-a667c2a384b7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9zb5p"
	Dec 13 13:46:14 embed-certs-973953 kubelet[730]: I1213 13:46:14.975661     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2cm8\" (UniqueName: \"kubernetes.io/projected/4c8daaac-7546-4f7d-a09c-a667c2a384b7-kube-api-access-m2cm8\") pod \"kubernetes-dashboard-855c9754f9-9zb5p\" (UID: \"4c8daaac-7546-4f7d-a09c-a667c2a384b7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9zb5p"
	Dec 13 13:46:14 embed-certs-973953 kubelet[730]: I1213 13:46:14.975687     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d2365a41-1a6e-44b7-9890-47de6820efdf-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-rdwkh\" (UID: \"d2365a41-1a6e-44b7-9890-47de6820efdf\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh"
	Dec 13 13:46:19 embed-certs-973953 kubelet[730]: I1213 13:46:19.825434     730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9zb5p" podStartSLOduration=2.594730658 podStartE2EDuration="5.825412413s" podCreationTimestamp="2025-12-13 13:46:14 +0000 UTC" firstStartedPulling="2025-12-13 13:46:15.18513591 +0000 UTC m=+6.723315431" lastFinishedPulling="2025-12-13 13:46:18.415817653 +0000 UTC m=+9.953997186" observedRunningTime="2025-12-13 13:46:18.620738391 +0000 UTC m=+10.158917932" watchObservedRunningTime="2025-12-13 13:46:19.825412413 +0000 UTC m=+11.363591955"
	Dec 13 13:46:21 embed-certs-973953 kubelet[730]: I1213 13:46:21.261167     730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh" podStartSLOduration=1.89337065 podStartE2EDuration="7.261145577s" podCreationTimestamp="2025-12-13 13:46:14 +0000 UTC" firstStartedPulling="2025-12-13 13:46:15.185312005 +0000 UTC m=+6.723491538" lastFinishedPulling="2025-12-13 13:46:20.553086945 +0000 UTC m=+12.091266465" observedRunningTime="2025-12-13 13:46:20.619017278 +0000 UTC m=+12.157196818" watchObservedRunningTime="2025-12-13 13:46:21.261145577 +0000 UTC m=+12.799325119"
	Dec 13 13:46:21 embed-certs-973953 kubelet[730]: I1213 13:46:21.613747     730 scope.go:117] "RemoveContainer" containerID="1b82dfc5a703e53976a3918ab50dc1d000d9437ab7b427384f4df9aab69e1690"
	Dec 13 13:46:22 embed-certs-973953 kubelet[730]: I1213 13:46:22.618018     730 scope.go:117] "RemoveContainer" containerID="1b82dfc5a703e53976a3918ab50dc1d000d9437ab7b427384f4df9aab69e1690"
	Dec 13 13:46:22 embed-certs-973953 kubelet[730]: I1213 13:46:22.618182     730 scope.go:117] "RemoveContainer" containerID="6d493ab7a549d2081851a1702e3fb9cb2ec842dba70edd2880a8abaa9d0c2fff"
	Dec 13 13:46:22 embed-certs-973953 kubelet[730]: E1213 13:46:22.618395     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rdwkh_kubernetes-dashboard(d2365a41-1a6e-44b7-9890-47de6820efdf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh" podUID="d2365a41-1a6e-44b7-9890-47de6820efdf"
	Dec 13 13:46:23 embed-certs-973953 kubelet[730]: I1213 13:46:23.622360     730 scope.go:117] "RemoveContainer" containerID="6d493ab7a549d2081851a1702e3fb9cb2ec842dba70edd2880a8abaa9d0c2fff"
	Dec 13 13:46:23 embed-certs-973953 kubelet[730]: E1213 13:46:23.622599     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rdwkh_kubernetes-dashboard(d2365a41-1a6e-44b7-9890-47de6820efdf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh" podUID="d2365a41-1a6e-44b7-9890-47de6820efdf"
	Dec 13 13:46:28 embed-certs-973953 kubelet[730]: I1213 13:46:28.626956     730 scope.go:117] "RemoveContainer" containerID="6d493ab7a549d2081851a1702e3fb9cb2ec842dba70edd2880a8abaa9d0c2fff"
	Dec 13 13:46:28 embed-certs-973953 kubelet[730]: E1213 13:46:28.627129     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rdwkh_kubernetes-dashboard(d2365a41-1a6e-44b7-9890-47de6820efdf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh" podUID="d2365a41-1a6e-44b7-9890-47de6820efdf"
	Dec 13 13:46:40 embed-certs-973953 kubelet[730]: I1213 13:46:40.550907     730 scope.go:117] "RemoveContainer" containerID="6d493ab7a549d2081851a1702e3fb9cb2ec842dba70edd2880a8abaa9d0c2fff"
	Dec 13 13:46:40 embed-certs-973953 kubelet[730]: I1213 13:46:40.665634     730 scope.go:117] "RemoveContainer" containerID="6d493ab7a549d2081851a1702e3fb9cb2ec842dba70edd2880a8abaa9d0c2fff"
	Dec 13 13:46:40 embed-certs-973953 kubelet[730]: I1213 13:46:40.665929     730 scope.go:117] "RemoveContainer" containerID="88fa874dcae8ecbde6c678ae8ef9b5c71b4742998a3c98d303aeed286a42e98c"
	Dec 13 13:46:40 embed-certs-973953 kubelet[730]: E1213 13:46:40.666175     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rdwkh_kubernetes-dashboard(d2365a41-1a6e-44b7-9890-47de6820efdf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh" podUID="d2365a41-1a6e-44b7-9890-47de6820efdf"
	Dec 13 13:46:42 embed-certs-973953 kubelet[730]: I1213 13:46:42.674629     730 scope.go:117] "RemoveContainer" containerID="25179d237bb92b28ed06c458b55b40813c605ade462e0315ffbf3dd6a5233072"
	Dec 13 13:46:48 embed-certs-973953 kubelet[730]: I1213 13:46:48.626876     730 scope.go:117] "RemoveContainer" containerID="88fa874dcae8ecbde6c678ae8ef9b5c71b4742998a3c98d303aeed286a42e98c"
	Dec 13 13:46:48 embed-certs-973953 kubelet[730]: E1213 13:46:48.627109     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rdwkh_kubernetes-dashboard(d2365a41-1a6e-44b7-9890-47de6820efdf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh" podUID="d2365a41-1a6e-44b7-9890-47de6820efdf"
	Dec 13 13:46:58 embed-certs-973953 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 13:46:58 embed-certs-973953 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 13:46:58 embed-certs-973953 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 13:46:58 embed-certs-973953 systemd[1]: kubelet.service: Consumed 1.626s CPU time.
	
	
	==> kubernetes-dashboard [829175c211a730469b696ac526ac2cf801bcf3f3786e55f7b59979ffe20b709e] <==
	2025/12/13 13:46:18 Using namespace: kubernetes-dashboard
	2025/12/13 13:46:18 Using in-cluster config to connect to apiserver
	2025/12/13 13:46:18 Using secret token for csrf signing
	2025/12/13 13:46:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 13:46:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 13:46:18 Successful initial request to the apiserver, version: v1.34.2
	2025/12/13 13:46:18 Generating JWE encryption key
	2025/12/13 13:46:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 13:46:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 13:46:18 Initializing JWE encryption key from synchronized object
	2025/12/13 13:46:18 Creating in-cluster Sidecar client
	2025/12/13 13:46:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 13:46:18 Serving insecurely on HTTP port: 9090
	2025/12/13 13:46:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 13:46:18 Starting overwatch
	
	
	==> storage-provisioner [25179d237bb92b28ed06c458b55b40813c605ade462e0315ffbf3dd6a5233072] <==
	I1213 13:46:11.922972       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 13:46:41.926842       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [efd7617437d9c5becbcfe2a0765d7577e574de74c964f48ef5f0c61f98e15c5d] <==
	I1213 13:46:43.792130       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 13:46:43.799522       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 13:46:43.799632       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 13:46:43.803424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:47.258751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:51.519508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:55.117725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:58.171671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:47:01.194570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:47:01.198970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 13:47:01.199154       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 13:47:01.199267       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5f4fe34c-f4c1-4423-bf81-d96ad4a8dd1c", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-973953_8954321e-a4e4-4205-a6ba-883a28ddd10f became leader
	I1213 13:47:01.199315       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-973953_8954321e-a4e4-4205-a6ba-883a28ddd10f!
	W1213 13:47:01.201211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:47:01.204467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 13:47:01.299590       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-973953_8954321e-a4e4-4205-a6ba-883a28ddd10f!
	

                                                
                                                
-- /stdout --
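The kubelet entries above show dashboard-metrics-scraper in CrashLoopBackOff, with the restart back-off doubling from 10s to 20s (the kubelet keeps doubling it, up to a 5-minute cap, until the container stays up). The commands below are not part of the harness output; they are a manual-triage sketch, assuming the embed-certs-973953 context is still reachable, for pulling the previous container log to see why it exits:

    # List dashboard pods and their restart counts
    kubectl --context embed-certs-973953 -n kubernetes-dashboard get pods
    # Log of the last crashed attempt of the scraper container
    kubectl --context embed-certs-973953 -n kubernetes-dashboard logs \
      dashboard-metrics-scraper-6ffb444bf9-rdwkh -c dashboard-metrics-scraper --previous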
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-973953 -n embed-certs-973953
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-973953 -n embed-certs-973953: exit status 2 (341.547878ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-973953 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
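Note that the field selector above only lists pods whose phase is not Running; a pod stuck in CrashLoopBackOff (such as the scraper pod above) typically still reports phase Running, so it can pass this check while restarting. A hedged follow-up sketch that also surfaces per-container restart counts:

    # Print namespace/name and restart counts for every pod
    kubectl --context embed-certs-973953 get po -A \
      -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{" restarts="}{.status.containerStatuses[*].restartCount}{"\n"}{end}'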
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-973953
helpers_test.go:244: (dbg) docker inspect embed-certs-973953:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2417f9c1840239bdbd95cda8d94a24c63c197abb274212b1cc09a3ca882e96e4",
	        "Created": "2025-12-13T13:44:57.200288812Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 726630,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:46:01.228368152Z",
	            "FinishedAt": "2025-12-13T13:45:59.801006561Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/2417f9c1840239bdbd95cda8d94a24c63c197abb274212b1cc09a3ca882e96e4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2417f9c1840239bdbd95cda8d94a24c63c197abb274212b1cc09a3ca882e96e4/hostname",
	        "HostsPath": "/var/lib/docker/containers/2417f9c1840239bdbd95cda8d94a24c63c197abb274212b1cc09a3ca882e96e4/hosts",
	        "LogPath": "/var/lib/docker/containers/2417f9c1840239bdbd95cda8d94a24c63c197abb274212b1cc09a3ca882e96e4/2417f9c1840239bdbd95cda8d94a24c63c197abb274212b1cc09a3ca882e96e4-json.log",
	        "Name": "/embed-certs-973953",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-973953:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-973953",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2417f9c1840239bdbd95cda8d94a24c63c197abb274212b1cc09a3ca882e96e4",
	                "LowerDir": "/var/lib/docker/overlay2/36f6f9a6afe8167407de04e815de1558c807ba641d95def877516655555a8d70-init/diff:/var/lib/docker/overlay2/2ab30f867418f233812f5ff754587aaeab7569a5579dc6a5c99873a35cf81eb6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/36f6f9a6afe8167407de04e815de1558c807ba641d95def877516655555a8d70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/36f6f9a6afe8167407de04e815de1558c807ba641d95def877516655555a8d70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/36f6f9a6afe8167407de04e815de1558c807ba641d95def877516655555a8d70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-973953",
	                "Source": "/var/lib/docker/volumes/embed-certs-973953/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-973953",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-973953",
	                "name.minikube.sigs.k8s.io": "embed-certs-973953",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "553d611c04f57618fddfc8bb6cc87d1daf5a01e93e0177eef4b6f7058ff94334",
	            "SandboxKey": "/var/run/docker/netns/553d611c04f5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33504"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33505"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33506"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-973953": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bdd21ce485b56ca4b32dd68df0837eaa769f5169ec1531dea2c7dd03d846c883",
	                    "EndpointID": "fbb2fe3ad028d0d3fd6a39f9e66f15740a131fb75a7b4266ac55ada44c320614",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "96:37:ee:f4:fe:8f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-973953",
	                        "2417f9c18402"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
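The full docker inspect dump above is what the post-mortem captures; for spot checks, docker's Go-template output is usually enough. A sketch using the container name from this report, which per the NetworkSettings above should print 33507 for the apiserver port, plus the state fields relevant to the Pause test:

    # Host port mapped to the apiserver's 8443/tcp
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-973953
    # Container state relevant to pause/unpause
    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' embed-certs-973953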
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-973953 -n embed-certs-973953
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-973953 -n embed-certs-973953: exit status 2 (329.012657ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-973953 logs -n 25
E1213 13:47:03.567115  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/kindnet-884214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:47:03.573725  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/kindnet-884214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-973953 logs -n 25: (1.129322149s)
E1213 13:47:03.587489  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/kindnet-884214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:47:03.609049  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/kindnet-884214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
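The cert_rotation errors interleaved above come from the test binary still holding a cached transport for the kindnet-884214 profile, whose client.crt no longer exists on disk (the profile was presumably deleted earlier in the run); they are noise around this test rather than part of its failure. A hedged sketch, using paths taken from the log above, to confirm whether any stale profile or context remains:

    # Profiles still present on disk
    ls /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/
    # Contexts still referenced by the integration kubeconfig
    KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig kubectl config get-contexts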
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable metrics-server -p no-preload-992258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-417583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p old-k8s-version-417583 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ stop    │ -p no-preload-992258 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ addons  │ enable metrics-server -p embed-certs-973953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ stop    │ -p embed-certs-973953 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ addons  │ enable dashboard -p no-preload-992258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p no-preload-992258 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ addons  │ enable dashboard -p embed-certs-973953 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p embed-certs-973953 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-038239 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-038239 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ image   │ old-k8s-version-417583 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p old-k8s-version-417583 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-038239 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p default-k8s-diff-port-038239 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ delete  │ -p old-k8s-version-417583                                                                                                                                                                                                                            │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ delete  │ -p old-k8s-version-417583                                                                                                                                                                                                                            │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p newest-cni-362964 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ image   │ no-preload-992258 image list --format=json                                                                                                                                                                                                           │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p no-preload-992258 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ delete  │ -p no-preload-992258                                                                                                                                                                                                                                 │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ image   │ embed-certs-973953 image list --format=json                                                                                                                                                                                                          │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p embed-certs-973953 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ delete  │ -p no-preload-992258                                                                                                                                                                                                                                 │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:46:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:46:38.807259  734452 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:46:38.807356  734452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:46:38.807364  734452 out.go:374] Setting ErrFile to fd 2...
	I1213 13:46:38.807368  734452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:46:38.807581  734452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:46:38.808124  734452 out.go:368] Setting JSON to false
	I1213 13:46:38.809505  734452 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8947,"bootTime":1765624652,"procs":408,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:46:38.809572  734452 start.go:143] virtualization: kvm guest
	I1213 13:46:38.811798  734452 out.go:179] * [newest-cni-362964] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:46:38.813823  734452 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:46:38.813876  734452 notify.go:221] Checking for updates...
	I1213 13:46:38.816262  734452 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:46:38.817585  734452 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:46:38.818693  734452 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:46:38.820057  734452 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:46:38.821335  734452 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:46:38.823198  734452 config.go:182] Loaded profile config "default-k8s-diff-port-038239": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:46:38.823338  734452 config.go:182] Loaded profile config "embed-certs-973953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:46:38.823469  734452 config.go:182] Loaded profile config "no-preload-992258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:46:38.823581  734452 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:46:38.861614  734452 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:46:38.861761  734452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:46:38.931148  734452 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-13 13:46:38.919230241 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:46:38.931318  734452 docker.go:319] overlay module found
	I1213 13:46:38.933289  734452 out.go:179] * Using the docker driver based on user configuration
	I1213 13:46:38.934577  734452 start.go:309] selected driver: docker
	I1213 13:46:38.934599  734452 start.go:927] validating driver "docker" against <nil>
	I1213 13:46:38.934616  734452 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:46:38.935491  734452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:46:39.004706  734452 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-13 13:46:38.992987781 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:46:39.004928  734452 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 13:46:39.004966  734452 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 13:46:39.005271  734452 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 13:46:39.007551  734452 out.go:179] * Using Docker driver with root privileges
	I1213 13:46:39.008611  734452 cni.go:84] Creating CNI manager for ""
	I1213 13:46:39.008719  734452 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:46:39.008737  734452 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 13:46:39.008854  734452 start.go:353] cluster config:
	{Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:46:39.010974  734452 out.go:179] * Starting "newest-cni-362964" primary control-plane node in "newest-cni-362964" cluster
	I1213 13:46:39.012247  734452 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 13:46:39.013645  734452 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 13:46:39.016856  734452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:46:39.016895  734452 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1213 13:46:39.016914  734452 cache.go:65] Caching tarball of preloaded images
	I1213 13:46:39.016962  734452 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 13:46:39.017009  734452 preload.go:238] Found /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 13:46:39.017022  734452 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 13:46:39.017144  734452 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/config.json ...
	I1213 13:46:39.017168  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/config.json: {Name:mk03f8124fe1745099f3d3cb3fe7fe5ae5e6b929 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:39.044079  734452 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 13:46:39.044103  734452 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 13:46:39.044123  734452 cache.go:243] Successfully downloaded all kic artifacts
	I1213 13:46:39.044162  734452 start.go:360] acquireMachinesLock for newest-cni-362964: {Name:mk61572d281c54a6e0670409b0733cc12a8d00e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:46:39.044269  734452 start.go:364] duration metric: took 87.606µs to acquireMachinesLock for "newest-cni-362964"
	I1213 13:46:39.044501  734452 start.go:93] Provisioning new machine with config: &{Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:46:39.044595  734452 start.go:125] createHost starting for "" (driver="docker")
	I1213 13:46:37.593032  723278 pod_ready.go:94] pod "coredns-7d764666f9-qfkgp" is "Ready"
	I1213 13:46:37.593060  723278 pod_ready.go:86] duration metric: took 39.506081408s for pod "coredns-7d764666f9-qfkgp" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.595721  723278 pod_ready.go:83] waiting for pod "etcd-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.600005  723278 pod_ready.go:94] pod "etcd-no-preload-992258" is "Ready"
	I1213 13:46:37.600027  723278 pod_ready.go:86] duration metric: took 4.283645ms for pod "etcd-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.602349  723278 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.606335  723278 pod_ready.go:94] pod "kube-apiserver-no-preload-992258" is "Ready"
	I1213 13:46:37.606353  723278 pod_ready.go:86] duration metric: took 3.985408ms for pod "kube-apiserver-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.608278  723278 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.793439  723278 pod_ready.go:94] pod "kube-controller-manager-no-preload-992258" is "Ready"
	I1213 13:46:37.793538  723278 pod_ready.go:86] duration metric: took 185.240657ms for pod "kube-controller-manager-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.993814  723278 pod_ready.go:83] waiting for pod "kube-proxy-sjrzk" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:38.391287  723278 pod_ready.go:94] pod "kube-proxy-sjrzk" is "Ready"
	I1213 13:46:38.391316  723278 pod_ready.go:86] duration metric: took 397.467202ms for pod "kube-proxy-sjrzk" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:38.592664  723278 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:38.991819  723278 pod_ready.go:94] pod "kube-scheduler-no-preload-992258" is "Ready"
	I1213 13:46:38.991855  723278 pod_ready.go:86] duration metric: took 399.165979ms for pod "kube-scheduler-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:38.991870  723278 pod_ready.go:40] duration metric: took 40.907684385s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:46:39.055074  723278 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1213 13:46:39.056693  723278 out.go:179] * Done! kubectl is now configured to use "no-preload-992258" cluster and "default" namespace by default
	I1213 13:46:37.744577  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 13:46:37.744596  730912 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 13:46:37.744659  730912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-038239
	I1213 13:46:37.769735  730912 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 13:46:37.769842  730912 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 13:46:37.769924  730912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa Username:docker}
	I1213 13:46:37.769942  730912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-038239
	I1213 13:46:37.773997  730912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa Username:docker}
	I1213 13:46:37.806607  730912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa Username:docker}
	I1213 13:46:37.885020  730912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:46:37.892323  730912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:46:37.901908  730912 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-038239" to be "Ready" ...
	I1213 13:46:37.908074  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 13:46:37.908095  730912 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 13:46:37.924625  730912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 13:46:37.926038  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 13:46:37.926060  730912 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 13:46:37.942015  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 13:46:37.942038  730912 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 13:46:37.961315  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 13:46:37.961339  730912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 13:46:37.979600  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 13:46:37.979629  730912 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 13:46:38.003635  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 13:46:38.003660  730912 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 13:46:38.019334  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 13:46:38.019359  730912 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 13:46:38.036465  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 13:46:38.036507  730912 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 13:46:38.053804  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 13:46:38.053835  730912 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 13:46:38.071650  730912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 13:46:39.597072  730912 node_ready.go:49] node "default-k8s-diff-port-038239" is "Ready"
	I1213 13:46:39.597127  730912 node_ready.go:38] duration metric: took 1.695171527s for node "default-k8s-diff-port-038239" to be "Ready" ...
	I1213 13:46:39.597146  730912 api_server.go:52] waiting for apiserver process to appear ...
	I1213 13:46:39.597331  730912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:46:40.220696  730912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.328338683s)
	I1213 13:46:40.220801  730912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.296116857s)
	I1213 13:46:40.220919  730912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.149240842s)
	I1213 13:46:40.221000  730912 api_server.go:72] duration metric: took 2.51244991s to wait for apiserver process to appear ...
	I1213 13:46:40.221052  730912 api_server.go:88] waiting for apiserver healthz status ...
	I1213 13:46:40.221075  730912 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1213 13:46:40.223057  730912 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-038239 addons enable metrics-server
	
	I1213 13:46:40.226524  730912 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:46:40.226548  730912 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:46:40.228246  730912 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1213 13:46:40.229402  730912 addons.go:530] duration metric: took 2.520798966s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
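	For reference: the 500 responses above are the apiserver's verbose /healthz output while post-start hooks such as rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes are still completing, which normally clears within a few probes (as it does further down). The same per-check breakdown can be pulled from a running cluster with a one-liner like the following; the kubectl context name is assumed to match the minikube profile name:
	
	    kubectl --context default-k8s-diff-port-038239 get --raw '/healthz?verbose'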
	W1213 13:46:37.552331  726383 pod_ready.go:104] pod "coredns-66bc5c9577-bl59n" is not "Ready", error: <nil>
	W1213 13:46:39.558845  726383 pod_ready.go:104] pod "coredns-66bc5c9577-bl59n" is not "Ready", error: <nil>
	I1213 13:46:39.050825  734452 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 13:46:39.051127  734452 start.go:159] libmachine.API.Create for "newest-cni-362964" (driver="docker")
	I1213 13:46:39.051170  734452 client.go:173] LocalClient.Create starting
	I1213 13:46:39.051291  734452 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem
	I1213 13:46:39.051338  734452 main.go:143] libmachine: Decoding PEM data...
	I1213 13:46:39.051367  734452 main.go:143] libmachine: Parsing certificate...
	I1213 13:46:39.051431  734452 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem
	I1213 13:46:39.051459  734452 main.go:143] libmachine: Decoding PEM data...
	I1213 13:46:39.051478  734452 main.go:143] libmachine: Parsing certificate...
	I1213 13:46:39.051941  734452 cli_runner.go:164] Run: docker network inspect newest-cni-362964 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 13:46:39.074137  734452 cli_runner.go:211] docker network inspect newest-cni-362964 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 13:46:39.074224  734452 network_create.go:284] running [docker network inspect newest-cni-362964] to gather additional debugging logs...
	I1213 13:46:39.074248  734452 cli_runner.go:164] Run: docker network inspect newest-cni-362964
	W1213 13:46:39.102273  734452 cli_runner.go:211] docker network inspect newest-cni-362964 returned with exit code 1
	I1213 13:46:39.102343  734452 network_create.go:287] error running [docker network inspect newest-cni-362964]: docker network inspect newest-cni-362964: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-362964 not found
	I1213 13:46:39.102377  734452 network_create.go:289] output of [docker network inspect newest-cni-362964]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-362964 not found
	
	** /stderr **
	I1213 13:46:39.102549  734452 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:46:39.122483  734452 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-90c6185d3a1c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:d7:d8:45:ed:62} reservation:<nil>}
	I1213 13:46:39.123444  734452 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b99c511b2851 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:f5:60:cf:cf:e0} reservation:<nil>}
	I1213 13:46:39.124137  734452 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8173e81c4a82 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:76:c5:9d:b0:f9} reservation:<nil>}
	I1213 13:46:39.125173  734452 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed8a30}
	I1213 13:46:39.125201  734452 network_create.go:124] attempt to create docker network newest-cni-362964 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 13:46:39.125260  734452 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-362964 newest-cni-362964
	I1213 13:46:39.179901  734452 network_create.go:108] docker network newest-cni-362964 192.168.76.0/24 created
	I1213 13:46:39.179928  734452 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-362964" container
	I1213 13:46:39.179979  734452 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 13:46:39.213973  734452 cli_runner.go:164] Run: docker volume create newest-cni-362964 --label name.minikube.sigs.k8s.io=newest-cni-362964 --label created_by.minikube.sigs.k8s.io=true
	I1213 13:46:39.235544  734452 oci.go:103] Successfully created a docker volume newest-cni-362964
	I1213 13:46:39.235642  734452 cli_runner.go:164] Run: docker run --rm --name newest-cni-362964-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-362964 --entrypoint /usr/bin/test -v newest-cni-362964:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 13:46:39.751588  734452 oci.go:107] Successfully prepared a docker volume newest-cni-362964
	I1213 13:46:39.751676  734452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:46:39.751688  734452 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 13:46:39.751766  734452 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-362964:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 13:46:40.721469  730912 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1213 13:46:40.727005  730912 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:46:40.727036  730912 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:46:41.221758  730912 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1213 13:46:41.227300  730912 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1213 13:46:41.228302  730912 api_server.go:141] control plane version: v1.34.2
	I1213 13:46:41.228325  730912 api_server.go:131] duration metric: took 1.007264269s to wait for apiserver health ...
	I1213 13:46:41.228334  730912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 13:46:41.231822  730912 system_pods.go:59] 8 kube-system pods found
	I1213 13:46:41.231857  730912 system_pods.go:61] "coredns-66bc5c9577-tzzmx" [980da903-c99d-4518-9ee3-7e5a96adec7e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:46:41.231869  730912 system_pods.go:61] "etcd-default-k8s-diff-port-038239" [4281e3fe-09b2-4f4b-b735-e81d8f92611d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 13:46:41.231876  730912 system_pods.go:61] "kindnet-c65rs" [70da74c6-b3f7-4c93-830f-cd2e08c1a82b] Running
	I1213 13:46:41.231882  730912 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038239" [61e90c83-4a74-41da-af00-64ad96e831b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 13:46:41.231891  730912 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038239" [327b2203-201b-4496-b88d-085894210077] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 13:46:41.231897  730912 system_pods.go:61] "kube-proxy-lzwfg" [706752fb-a589-4e6f-b710-228e3650dacd] Running
	I1213 13:46:41.231905  730912 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038239" [ae96dbde-d4ad-4db9-a9d4-dd56f9954d93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 13:46:41.231912  730912 system_pods.go:61] "storage-provisioner" [ee84dbb0-2764-427e-aa74-2827e9ce9620] Running
	I1213 13:46:41.231923  730912 system_pods.go:74] duration metric: took 3.580887ms to wait for pod list to return data ...
	I1213 13:46:41.231936  730912 default_sa.go:34] waiting for default service account to be created ...
	I1213 13:46:41.234505  730912 default_sa.go:45] found service account: "default"
	I1213 13:46:41.234528  730912 default_sa.go:55] duration metric: took 2.585513ms for default service account to be created ...
	I1213 13:46:41.234537  730912 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 13:46:41.237182  730912 system_pods.go:86] 8 kube-system pods found
	I1213 13:46:41.237209  730912 system_pods.go:89] "coredns-66bc5c9577-tzzmx" [980da903-c99d-4518-9ee3-7e5a96adec7e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:46:41.237220  730912 system_pods.go:89] "etcd-default-k8s-diff-port-038239" [4281e3fe-09b2-4f4b-b735-e81d8f92611d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 13:46:41.237227  730912 system_pods.go:89] "kindnet-c65rs" [70da74c6-b3f7-4c93-830f-cd2e08c1a82b] Running
	I1213 13:46:41.237236  730912 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-038239" [61e90c83-4a74-41da-af00-64ad96e831b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 13:46:41.237245  730912 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-038239" [327b2203-201b-4496-b88d-085894210077] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 13:46:41.237253  730912 system_pods.go:89] "kube-proxy-lzwfg" [706752fb-a589-4e6f-b710-228e3650dacd] Running
	I1213 13:46:41.237261  730912 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-038239" [ae96dbde-d4ad-4db9-a9d4-dd56f9954d93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 13:46:41.237271  730912 system_pods.go:89] "storage-provisioner" [ee84dbb0-2764-427e-aa74-2827e9ce9620] Running
	I1213 13:46:41.237279  730912 system_pods.go:126] duration metric: took 2.735704ms to wait for k8s-apps to be running ...
	I1213 13:46:41.237288  730912 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 13:46:41.237331  730912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:46:41.250597  730912 system_svc.go:56] duration metric: took 13.296933ms WaitForService to wait for kubelet
	I1213 13:46:41.250630  730912 kubeadm.go:587] duration metric: took 3.542081461s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:46:41.250655  730912 node_conditions.go:102] verifying NodePressure condition ...
	I1213 13:46:41.254078  730912 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 13:46:41.254103  730912 node_conditions.go:123] node cpu capacity is 8
	I1213 13:46:41.254126  730912 node_conditions.go:105] duration metric: took 3.462529ms to run NodePressure ...
	I1213 13:46:41.254141  730912 start.go:242] waiting for startup goroutines ...
	I1213 13:46:41.254155  730912 start.go:247] waiting for cluster config update ...
	I1213 13:46:41.254174  730912 start.go:256] writing updated cluster config ...
	I1213 13:46:41.254482  730912 ssh_runner.go:195] Run: rm -f paused
	I1213 13:46:41.258509  730912 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:46:41.262286  730912 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tzzmx" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 13:46:43.315769  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	W1213 13:46:42.051398  726383 pod_ready.go:104] pod "coredns-66bc5c9577-bl59n" is not "Ready", error: <nil>
	I1213 13:46:44.558674  726383 pod_ready.go:94] pod "coredns-66bc5c9577-bl59n" is "Ready"
	I1213 13:46:44.558713  726383 pod_ready.go:86] duration metric: took 32.012951382s for pod "coredns-66bc5c9577-bl59n" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.561144  726383 pod_ready.go:83] waiting for pod "etcd-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.565899  726383 pod_ready.go:94] pod "etcd-embed-certs-973953" is "Ready"
	I1213 13:46:44.565923  726383 pod_ready.go:86] duration metric: took 4.7423ms for pod "etcd-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.568261  726383 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.572565  726383 pod_ready.go:94] pod "kube-apiserver-embed-certs-973953" is "Ready"
	I1213 13:46:44.572592  726383 pod_ready.go:86] duration metric: took 4.304087ms for pod "kube-apiserver-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.575031  726383 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.750453  726383 pod_ready.go:94] pod "kube-controller-manager-embed-certs-973953" is "Ready"
	I1213 13:46:44.750489  726383 pod_ready.go:86] duration metric: took 175.430643ms for pod "kube-controller-manager-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.951317  726383 pod_ready.go:83] waiting for pod "kube-proxy-jqcpv" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:45.350477  726383 pod_ready.go:94] pod "kube-proxy-jqcpv" is "Ready"
	I1213 13:46:45.350507  726383 pod_ready.go:86] duration metric: took 399.159038ms for pod "kube-proxy-jqcpv" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:45.550818  726383 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:45.950357  726383 pod_ready.go:94] pod "kube-scheduler-embed-certs-973953" is "Ready"
	I1213 13:46:45.950385  726383 pod_ready.go:86] duration metric: took 399.541821ms for pod "kube-scheduler-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:45.950396  726383 pod_ready.go:40] duration metric: took 33.408030209s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:46:46.003877  726383 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 13:46:46.006266  726383 out.go:179] * Done! kubectl is now configured to use "embed-certs-973953" cluster and "default" namespace by default
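	Once a profile reaches this "Done!" state, the pod readiness the test polled above can be re-checked by hand; a minimal sketch, assuming the kubectl context is named after the profile:
	
	    kubectl --context embed-certs-973953 get pods -n kube-system -l k8s-app=kube-dns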
	I1213 13:46:43.827925  734452 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-362964:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.0760512s)
	I1213 13:46:43.827966  734452 kic.go:203] duration metric: took 4.076273522s to extract preloaded images to volume ...
	W1213 13:46:43.828063  734452 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1213 13:46:43.828111  734452 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1213 13:46:43.828160  734452 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 13:46:43.885693  734452 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-362964 --name newest-cni-362964 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-362964 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-362964 --network newest-cni-362964 --ip 192.168.76.2 --volume newest-cni-362964:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 13:46:44.183753  734452 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Running}}
	I1213 13:46:44.203369  734452 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:46:44.223422  734452 cli_runner.go:164] Run: docker exec newest-cni-362964 stat /var/lib/dpkg/alternatives/iptables
	I1213 13:46:44.277034  734452 oci.go:144] the created container "newest-cni-362964" has a running status.
	I1213 13:46:44.277064  734452 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa...
	I1213 13:46:44.344914  734452 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 13:46:44.377198  734452 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:46:44.402053  734452 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 13:46:44.402083  734452 kic_runner.go:114] Args: [docker exec --privileged newest-cni-362964 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 13:46:44.478040  734452 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:46:44.506931  734452 machine.go:94] provisionDockerMachine start ...
	I1213 13:46:44.507418  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:44.537001  734452 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:44.537395  734452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1213 13:46:44.537427  734452 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:46:44.538118  734452 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48464->127.0.0.1:33515: read: connection reset by peer
	I1213 13:46:47.689037  734452 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-362964
	
	I1213 13:46:47.689072  734452 ubuntu.go:182] provisioning hostname "newest-cni-362964"
	I1213 13:46:47.689140  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:47.712543  734452 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:47.713000  734452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1213 13:46:47.713025  734452 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-362964 && echo "newest-cni-362964" | sudo tee /etc/hostname
	I1213 13:46:47.873217  734452 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-362964
	
	I1213 13:46:47.873318  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:47.896725  734452 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:47.897081  734452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1213 13:46:47.897130  734452 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-362964' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-362964/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-362964' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:46:48.044203  734452 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 13:46:48.044232  734452 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-390571/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-390571/.minikube}
	I1213 13:46:48.044289  734452 ubuntu.go:190] setting up certificates
	I1213 13:46:48.044304  734452 provision.go:84] configureAuth start
	I1213 13:46:48.044368  734452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:46:48.068662  734452 provision.go:143] copyHostCerts
	I1213 13:46:48.068728  734452 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem, removing ...
	I1213 13:46:48.068739  734452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem
	I1213 13:46:48.068879  734452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem (1123 bytes)
	I1213 13:46:48.069004  734452 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem, removing ...
	I1213 13:46:48.069048  734452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem
	I1213 13:46:48.069113  734452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem (1679 bytes)
	I1213 13:46:48.069294  734452 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem, removing ...
	I1213 13:46:48.069312  734452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem
	I1213 13:46:48.069355  734452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem (1078 bytes)
	I1213 13:46:48.069462  734452 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem org=jenkins.newest-cni-362964 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-362964]
	I1213 13:46:48.220174  734452 provision.go:177] copyRemoteCerts
	I1213 13:46:48.220240  734452 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:46:48.220284  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:48.242055  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:46:48.348835  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 13:46:48.372845  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:46:48.394838  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 13:46:48.416450  734452 provision.go:87] duration metric: took 372.119155ms to configureAuth
	I1213 13:46:48.416488  734452 ubuntu.go:206] setting minikube options for container-runtime
	I1213 13:46:48.416718  734452 config.go:182] Loaded profile config "newest-cni-362964": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:46:48.416935  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:48.438340  734452 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:48.438572  734452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1213 13:46:48.438593  734452 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:46:48.772615  734452 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:46:48.772642  734452 machine.go:97] duration metric: took 4.265315999s to provisionDockerMachine
	I1213 13:46:48.772654  734452 client.go:176] duration metric: took 9.721476668s to LocalClient.Create
	I1213 13:46:48.772675  734452 start.go:167] duration metric: took 9.721549598s to libmachine.API.Create "newest-cni-362964"
	I1213 13:46:48.772685  734452 start.go:293] postStartSetup for "newest-cni-362964" (driver="docker")
	I1213 13:46:48.772700  734452 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:46:48.772766  734452 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:46:48.772846  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:48.796130  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	W1213 13:46:45.768717  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	W1213 13:46:48.269155  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	I1213 13:46:48.906093  734452 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:46:48.910767  734452 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 13:46:48.910823  734452 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 13:46:48.910839  734452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/addons for local assets ...
	I1213 13:46:48.910910  734452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/files for local assets ...
	I1213 13:46:48.911037  734452 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem -> 3941302.pem in /etc/ssl/certs
	I1213 13:46:48.911209  734452 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 13:46:48.921911  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:46:48.947921  734452 start.go:296] duration metric: took 175.219125ms for postStartSetup
	I1213 13:46:48.948314  734452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:46:48.972402  734452 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/config.json ...
	I1213 13:46:48.972688  734452 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:46:48.972732  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:48.995624  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:46:49.100377  734452 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 13:46:49.106414  734452 start.go:128] duration metric: took 10.061800408s to createHost
	I1213 13:46:49.106444  734452 start.go:83] releasing machines lock for "newest-cni-362964", held for 10.062163513s
	I1213 13:46:49.106521  734452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:46:49.131359  734452 ssh_runner.go:195] Run: cat /version.json
	I1213 13:46:49.131430  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:49.131434  734452 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:46:49.131534  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:49.155684  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:46:49.156118  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:46:49.345845  734452 ssh_runner.go:195] Run: systemctl --version
	I1213 13:46:49.354872  734452 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:46:49.402808  734452 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 13:46:49.408988  734452 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:46:49.409066  734452 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:46:49.440997  734452 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 13:46:49.441025  734452 start.go:496] detecting cgroup driver to use...
	I1213 13:46:49.441060  734452 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 13:46:49.441115  734452 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:46:49.462316  734452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:46:49.477713  734452 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:46:49.477795  734452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:46:49.501648  734452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:46:49.526524  734452 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:46:49.629504  734452 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:46:49.728940  734452 docker.go:234] disabling docker service ...
	I1213 13:46:49.729008  734452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:46:49.751594  734452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:46:49.766407  734452 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:46:49.855523  734452 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:46:49.940562  734452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:46:49.953965  734452 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:46:49.968209  734452 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 13:46:49.968288  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:49.979551  734452 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 13:46:49.979626  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:49.988154  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:49.997026  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:50.005337  734452 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:46:50.013019  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:50.021641  734452 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:50.035024  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:50.043264  734452 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:46:50.050409  734452 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:46:50.057213  734452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:46:50.144700  734452 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 13:46:51.023735  734452 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:46:51.023835  734452 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:46:51.028520  734452 start.go:564] Will wait 60s for crictl version
	I1213 13:46:51.028585  734452 ssh_runner.go:195] Run: which crictl
	I1213 13:46:51.032526  734452 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 13:46:51.058397  734452 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 13:46:51.058490  734452 ssh_runner.go:195] Run: crio --version
	I1213 13:46:51.086747  734452 ssh_runner.go:195] Run: crio --version
	I1213 13:46:51.117725  734452 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 13:46:51.118756  734452 cli_runner.go:164] Run: docker network inspect newest-cni-362964 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:46:51.138994  734452 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 13:46:51.143167  734452 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:46:51.155706  734452 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 13:46:51.156802  734452 kubeadm.go:884] updating cluster {Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:46:51.156953  734452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:46:51.157039  734452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:46:51.198200  734452 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:46:51.198221  734452 crio.go:433] Images already preloaded, skipping extraction
	I1213 13:46:51.198267  734452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:46:51.225683  734452 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:46:51.225709  734452 cache_images.go:86] Images are preloaded, skipping loading
	I1213 13:46:51.225719  734452 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 13:46:51.225843  734452 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-362964 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 13:46:51.225940  734452 ssh_runner.go:195] Run: crio config
	I1213 13:46:51.273702  734452 cni.go:84] Creating CNI manager for ""
	I1213 13:46:51.273722  734452 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:46:51.273741  734452 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 13:46:51.273768  734452 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-362964 NodeName:newest-cni-362964 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:46:51.273951  734452 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-362964"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
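	The rendered kubeadm config above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below; as a rough sketch (binary and config paths taken from this log), it can be exercised without persisting any changes via kubeadm's dry-run mode:
	
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run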
	I1213 13:46:51.274024  734452 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 13:46:51.282302  734452 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:46:51.282376  734452 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:46:51.290422  734452 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 13:46:51.303253  734452 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 13:46:51.318075  734452 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1213 13:46:51.331214  734452 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 13:46:51.334976  734452 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:46:51.345829  734452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:46:51.437080  734452 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:46:51.461201  734452 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964 for IP: 192.168.76.2
	I1213 13:46:51.461228  734452 certs.go:195] generating shared ca certs ...
	I1213 13:46:51.461258  734452 certs.go:227] acquiring lock for ca certs: {Name:mkb6963f3134ffd486c672ddb3a967e56122d5d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.461456  734452 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key
	I1213 13:46:51.461517  734452 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key
	I1213 13:46:51.461535  734452 certs.go:257] generating profile certs ...
	I1213 13:46:51.461611  734452 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.key
	I1213 13:46:51.461644  734452 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.crt with IP's: []
	I1213 13:46:51.675129  734452 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.crt ...
	I1213 13:46:51.675163  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.crt: {Name:mkfc2919111fa26d81b7191d3873ecc598936940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.675356  734452 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.key ...
	I1213 13:46:51.675368  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.key: {Name:mkcca4e2f19072f042ecc8cce95f891ff7bba521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.675455  734452 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb
	I1213 13:46:51.675473  734452 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt.a735fadb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 13:46:51.732537  734452 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt.a735fadb ...
	I1213 13:46:51.732571  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt.a735fadb: {Name:mka68b1fc7336251712aa83c57233f6aaa26b56e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.732752  734452 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb ...
	I1213 13:46:51.732766  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb: {Name:mk7b2188d2ac3de30be4a0ecf05771755b89586c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.732898  734452 certs.go:382] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt.a735fadb -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt
	I1213 13:46:51.733002  734452 certs.go:386] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key
	I1213 13:46:51.733072  734452 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key
	I1213 13:46:51.733091  734452 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt with IP's: []
	I1213 13:46:51.768844  734452 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt ...
	I1213 13:46:51.768876  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt: {Name:mk54ca537df717e699f15967f0763bc1a365ba7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.769051  734452 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key ...
	I1213 13:46:51.769066  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key: {Name:mkc6731d5f061dd55c086b1529645fdd7e056a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.769254  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem (1338 bytes)
	W1213 13:46:51.769294  734452 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130_empty.pem, impossibly tiny 0 bytes
	I1213 13:46:51.769306  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:46:51.769336  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:46:51.769363  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:46:51.769392  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem (1679 bytes)
	I1213 13:46:51.769438  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:46:51.770096  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:46:51.789179  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 13:46:51.807957  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:46:51.829246  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 13:46:51.849816  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 13:46:51.867382  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 13:46:51.884431  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:46:51.901499  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 13:46:51.918590  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem --> /usr/share/ca-certificates/394130.pem (1338 bytes)
	I1213 13:46:51.938587  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /usr/share/ca-certificates/3941302.pem (1708 bytes)
	I1213 13:46:51.956885  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:46:51.976711  734452 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:46:51.990451  734452 ssh_runner.go:195] Run: openssl version
	I1213 13:46:51.996876  734452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/394130.pem
	I1213 13:46:52.004771  734452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/394130.pem /etc/ssl/certs/394130.pem
	I1213 13:46:52.013327  734452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/394130.pem
	I1213 13:46:52.017188  734452 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/394130.pem
	I1213 13:46:52.017246  734452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/394130.pem
	I1213 13:46:52.052182  734452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 13:46:52.060156  734452 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/394130.pem /etc/ssl/certs/51391683.0
	I1213 13:46:52.067555  734452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3941302.pem
	I1213 13:46:52.074980  734452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3941302.pem /etc/ssl/certs/3941302.pem
	I1213 13:46:52.083293  734452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3941302.pem
	I1213 13:46:52.087008  734452 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/3941302.pem
	I1213 13:46:52.087060  734452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3941302.pem
	I1213 13:46:52.121292  734452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 13:46:52.129202  734452 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3941302.pem /etc/ssl/certs/3ec20f2e.0
	I1213 13:46:52.136878  734452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:52.144894  734452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:46:52.152936  734452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:52.156906  734452 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:52.156974  734452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:52.192626  734452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 13:46:52.200484  734452 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 13:46:52.207749  734452 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:46:52.211283  734452 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 13:46:52.211338  734452 kubeadm.go:401] StartCluster: {Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:46:52.211418  734452 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:46:52.211486  734452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:46:52.238989  734452 cri.go:89] found id: ""
	I1213 13:46:52.239071  734452 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:46:52.248678  734452 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 13:46:52.257209  734452 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 13:46:52.257267  734452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 13:46:52.265205  734452 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 13:46:52.265226  734452 kubeadm.go:158] found existing configuration files:
	
	I1213 13:46:52.265280  734452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 13:46:52.273379  734452 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 13:46:52.273433  734452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 13:46:52.280768  734452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 13:46:52.288560  734452 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 13:46:52.288610  734452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 13:46:52.296093  734452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 13:46:52.303964  734452 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 13:46:52.304023  734452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 13:46:52.311559  734452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 13:46:52.320197  734452 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 13:46:52.320257  734452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 13:46:52.334065  734452 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 13:46:52.371455  734452 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 13:46:52.371571  734452 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 13:46:52.442098  734452 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 13:46:52.442200  734452 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1213 13:46:52.442255  734452 kubeadm.go:319] OS: Linux
	I1213 13:46:52.442323  734452 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 13:46:52.442390  734452 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 13:46:52.442455  734452 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 13:46:52.442512  734452 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 13:46:52.442578  734452 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 13:46:52.442697  734452 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 13:46:52.442826  734452 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 13:46:52.442969  734452 kubeadm.go:319] CGROUPS_IO: enabled
	I1213 13:46:52.508064  734452 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 13:46:52.508249  734452 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 13:46:52.508406  734452 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 13:46:52.516288  734452 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 13:46:52.519224  734452 out.go:252]   - Generating certificates and keys ...
	I1213 13:46:52.519355  734452 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 13:46:52.519493  734452 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 13:46:52.532097  734452 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 13:46:52.698464  734452 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 13:46:52.742997  734452 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 13:46:52.834618  734452 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 13:46:52.947440  734452 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 13:46:52.947607  734452 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-362964] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 13:46:53.014857  734452 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 13:46:53.015046  734452 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-362964] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 13:46:53.141370  734452 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 13:46:53.236321  734452 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 13:46:53.329100  734452 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 13:46:53.329196  734452 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 13:46:53.418157  734452 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 13:46:53.508241  734452 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 13:46:53.569616  734452 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 13:46:53.618621  734452 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 13:46:53.646993  734452 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 13:46:53.647697  734452 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 13:46:53.651749  734452 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 13:46:53.653114  734452 out.go:252]   - Booting up control plane ...
	I1213 13:46:53.653242  734452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 13:46:53.653571  734452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 13:46:53.654959  734452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 13:46:53.677067  734452 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 13:46:53.677242  734452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 13:46:53.684167  734452 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 13:46:53.684396  734452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 13:46:53.684462  734452 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 13:46:53.802893  734452 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 13:46:53.803078  734452 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1213 13:46:50.770133  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	W1213 13:46:53.268827  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	I1213 13:46:54.304306  734452 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.553214ms
	I1213 13:46:54.307257  734452 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 13:46:54.307404  734452 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1213 13:46:54.307545  734452 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 13:46:54.307659  734452 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 13:46:54.813847  734452 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 506.428042ms
	I1213 13:46:56.298840  734452 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.990746279s
	I1213 13:46:57.809364  734452 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502006023s
	I1213 13:46:57.828589  734452 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 13:46:57.839273  734452 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 13:46:57.849857  734452 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 13:46:57.850169  734452 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-362964 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 13:46:57.859609  734452 kubeadm.go:319] [bootstrap-token] Using token: wuq8cx.vg81wzcp5d3gm8z3
	I1213 13:46:57.861437  734452 out.go:252]   - Configuring RBAC rules ...
	I1213 13:46:57.861592  734452 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 13:46:57.864478  734452 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 13:46:57.870588  734452 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 13:46:57.873470  734452 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 13:46:57.877170  734452 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 13:46:57.880076  734452 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 13:46:58.219051  734452 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 13:46:58.659077  734452 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 13:46:59.216148  734452 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 13:46:59.217382  734452 kubeadm.go:319] 
	I1213 13:46:59.217471  734452 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 13:46:59.217510  734452 kubeadm.go:319] 
	I1213 13:46:59.217666  734452 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 13:46:59.217685  734452 kubeadm.go:319] 
	I1213 13:46:59.217725  734452 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 13:46:59.217827  734452 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 13:46:59.217929  734452 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 13:46:59.217945  734452 kubeadm.go:319] 
	I1213 13:46:59.218020  734452 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 13:46:59.218034  734452 kubeadm.go:319] 
	I1213 13:46:59.218103  734452 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 13:46:59.218116  734452 kubeadm.go:319] 
	I1213 13:46:59.218206  734452 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 13:46:59.218322  734452 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 13:46:59.218423  734452 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 13:46:59.218435  734452 kubeadm.go:319] 
	I1213 13:46:59.218551  734452 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 13:46:59.218673  734452 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 13:46:59.218689  734452 kubeadm.go:319] 
	I1213 13:46:59.218845  734452 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token wuq8cx.vg81wzcp5d3gm8z3 \
	I1213 13:46:59.218984  734452 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ef8a7d1add12598ce2ec2dab13c01ff0d42437969bb9f662810a30bd819ab8f9 \
	I1213 13:46:59.219028  734452 kubeadm.go:319] 	--control-plane 
	I1213 13:46:59.219044  734452 kubeadm.go:319] 
	I1213 13:46:59.219172  734452 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 13:46:59.219183  734452 kubeadm.go:319] 
	I1213 13:46:59.219331  734452 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token wuq8cx.vg81wzcp5d3gm8z3 \
	I1213 13:46:59.219511  734452 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ef8a7d1add12598ce2ec2dab13c01ff0d42437969bb9f662810a30bd819ab8f9 
	I1213 13:46:59.221855  734452 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1213 13:46:59.222014  734452 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 13:46:59.222057  734452 cni.go:84] Creating CNI manager for ""
	I1213 13:46:59.222071  734452 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:46:59.232701  734452 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1213 13:46:55.768853  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	W1213 13:46:58.272372  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
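	
	The certificate distribution at 13:46:51-52 above installs each PEM under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject hash (openssl x509 -hash -noout followed by ln -fs <pem> /etc/ssl/certs/<hash>.0), which is how OpenSSL-based clients discover trust anchors; the ".0" suffix disambiguates certificates that share a hash. A minimal local Go sketch of that hash-link convention (paths are illustrative, not minikube's own code, which runs the same commands over SSH):
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkCACert mirrors the hash-link step from the log: compute the OpenSSL
	// subject hash of a PEM certificate and symlink it into certDir as
	// "<hash>.0" so OpenSSL-based clients can find it.
	func linkCACert(pemPath, certDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certDir, hash+".0")
		_ = os.Remove(link) // emulate ln -fs: replace any existing link
		return os.Symlink(pemPath, link)
	}
	
	func main() {
		// Illustrative paths only, not taken verbatim from the report.
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}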
	
	
	==> CRI-O <==
	Dec 13 13:46:22 embed-certs-973953 crio[566]: time="2025-12-13T13:46:22.353109422Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 13:46:22 embed-certs-973953 crio[566]: time="2025-12-13T13:46:22.619370228Z" level=info msg="Removing container: 1b82dfc5a703e53976a3918ab50dc1d000d9437ab7b427384f4df9aab69e1690" id=a2fe3143-ebe0-4347-a441-91c82c4810fd name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 13:46:22 embed-certs-973953 crio[566]: time="2025-12-13T13:46:22.630863527Z" level=info msg="Removed container 1b82dfc5a703e53976a3918ab50dc1d000d9437ab7b427384f4df9aab69e1690: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh/dashboard-metrics-scraper" id=a2fe3143-ebe0-4347-a441-91c82c4810fd name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 13:46:40 embed-certs-973953 crio[566]: time="2025-12-13T13:46:40.551416746Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d8f01804-e321-436b-9817-8e812cbddb50 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:40 embed-certs-973953 crio[566]: time="2025-12-13T13:46:40.552341885Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cecf08ee-1c3c-4162-b1a8-28e21e43dce4 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:40 embed-certs-973953 crio[566]: time="2025-12-13T13:46:40.553386246Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh/dashboard-metrics-scraper" id=9426bf20-07aa-4c70-9a60-85464d455823 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:40 embed-certs-973953 crio[566]: time="2025-12-13T13:46:40.553531798Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:40 embed-certs-973953 crio[566]: time="2025-12-13T13:46:40.55933211Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:40 embed-certs-973953 crio[566]: time="2025-12-13T13:46:40.559883616Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:40 embed-certs-973953 crio[566]: time="2025-12-13T13:46:40.588662174Z" level=info msg="Created container 88fa874dcae8ecbde6c678ae8ef9b5c71b4742998a3c98d303aeed286a42e98c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh/dashboard-metrics-scraper" id=9426bf20-07aa-4c70-9a60-85464d455823 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:40 embed-certs-973953 crio[566]: time="2025-12-13T13:46:40.589538732Z" level=info msg="Starting container: 88fa874dcae8ecbde6c678ae8ef9b5c71b4742998a3c98d303aeed286a42e98c" id=187c2d48-42c3-4fde-9518-be570d480a8b name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:46:40 embed-certs-973953 crio[566]: time="2025-12-13T13:46:40.592302327Z" level=info msg="Started container" PID=1777 containerID=88fa874dcae8ecbde6c678ae8ef9b5c71b4742998a3c98d303aeed286a42e98c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh/dashboard-metrics-scraper id=187c2d48-42c3-4fde-9518-be570d480a8b name=/runtime.v1.RuntimeService/StartContainer sandboxID=89aa4101d58c72d9ed51b2f1cc864f2a088df528dcacd14be9ee59fa3a1aa29e
	Dec 13 13:46:40 embed-certs-973953 crio[566]: time="2025-12-13T13:46:40.667239682Z" level=info msg="Removing container: 6d493ab7a549d2081851a1702e3fb9cb2ec842dba70edd2880a8abaa9d0c2fff" id=0433495a-2dfa-49e1-ab2c-4211c3e40c7d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 13:46:40 embed-certs-973953 crio[566]: time="2025-12-13T13:46:40.680942781Z" level=info msg="Removed container 6d493ab7a549d2081851a1702e3fb9cb2ec842dba70edd2880a8abaa9d0c2fff: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh/dashboard-metrics-scraper" id=0433495a-2dfa-49e1-ab2c-4211c3e40c7d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 13:46:42 embed-certs-973953 crio[566]: time="2025-12-13T13:46:42.675121793Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2e52d7d2-efde-4eda-92c8-0e6c8ba839a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:42 embed-certs-973953 crio[566]: time="2025-12-13T13:46:42.759515941Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=babc7d15-1b16-45bd-be3a-3dad4c805663 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:46:42 embed-certs-973953 crio[566]: time="2025-12-13T13:46:42.760860716Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=09770379-3dab-4ddd-bab9-7c8618e5fece name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:42 embed-certs-973953 crio[566]: time="2025-12-13T13:46:42.761014803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:42 embed-certs-973953 crio[566]: time="2025-12-13T13:46:42.882175484Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:42 embed-certs-973953 crio[566]: time="2025-12-13T13:46:42.88236839Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/106bfe52c3a294748f7f195cb23ce6639daeb58e637d52c70a8bb6ef9c6890dc/merged/etc/passwd: no such file or directory"
	Dec 13 13:46:42 embed-certs-973953 crio[566]: time="2025-12-13T13:46:42.882404788Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/106bfe52c3a294748f7f195cb23ce6639daeb58e637d52c70a8bb6ef9c6890dc/merged/etc/group: no such file or directory"
	Dec 13 13:46:42 embed-certs-973953 crio[566]: time="2025-12-13T13:46:42.882675131Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:46:43 embed-certs-973953 crio[566]: time="2025-12-13T13:46:43.091283438Z" level=info msg="Created container efd7617437d9c5becbcfe2a0765d7577e574de74c964f48ef5f0c61f98e15c5d: kube-system/storage-provisioner/storage-provisioner" id=09770379-3dab-4ddd-bab9-7c8618e5fece name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:46:43 embed-certs-973953 crio[566]: time="2025-12-13T13:46:43.092011446Z" level=info msg="Starting container: efd7617437d9c5becbcfe2a0765d7577e574de74c964f48ef5f0c61f98e15c5d" id=e91c7fb1-cf73-48cd-9ac5-c388d39e25aa name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:46:43 embed-certs-973953 crio[566]: time="2025-12-13T13:46:43.094012093Z" level=info msg="Started container" PID=1793 containerID=efd7617437d9c5becbcfe2a0765d7577e574de74c964f48ef5f0c61f98e15c5d description=kube-system/storage-provisioner/storage-provisioner id=e91c7fb1-cf73-48cd-9ac5-c388d39e25aa name=/runtime.v1.RuntimeService/StartContainer sandboxID=fcd212653d7f1e61b2f20aca2226d5b2653f8a8685e61b85fbd95da675a2ccf3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	efd7617437d9c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   fcd212653d7f1       storage-provisioner                          kube-system
	88fa874dcae8e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   89aa4101d58c7       dashboard-metrics-scraper-6ffb444bf9-rdwkh   kubernetes-dashboard
	829175c211a73       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   d733563635a79       kubernetes-dashboard-855c9754f9-9zb5p        kubernetes-dashboard
	ee4a5f8af0e37       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   b3ab42e749eaa       busybox                                      default
	7c492a13369dc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   b5ce6d268bd4b       coredns-66bc5c9577-bl59n                     kube-system
	a3bd12ac5959f       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           51 seconds ago      Running             kube-proxy                  0                   8a62fc7018533       kube-proxy-jqcpv                             kube-system
	6c555c3d5d969       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   19a849590ec40       kindnet-bw5d4                                kube-system
	25179d237bb92       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   fcd212653d7f1       storage-provisioner                          kube-system
	ca59722508ee8       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           54 seconds ago      Running             kube-apiserver              0                   3fff06e172c61       kube-apiserver-embed-certs-973953            kube-system
	63a2ba4a5a1d9       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           54 seconds ago      Running             etcd                        0                   ffee6fdd91e44       etcd-embed-certs-973953                      kube-system
	447b95afd76fc       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           54 seconds ago      Running             kube-scheduler              0                   f8734f1314361       kube-scheduler-embed-certs-973953            kube-system
	628ec34c6d25d       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           54 seconds ago      Running             kube-controller-manager     0                   aa08751bd0ac2       kube-controller-manager-embed-certs-973953   kube-system
	
	
	==> coredns [7c492a13369dcfd1ee3f016e954fbecf54508fa7ba80fcd6015ec64cf928a302] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42517 - 33976 "HINFO IN 9206848693974487165.7963474350790246474. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.175046976s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
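	
	The dial timeouts above mean CoreDNS could not reach the kubernetes Service VIP (10.96.0.1:443) while it was starting: the list calls for Services, EndpointSlices and Namespaces all failed with "i/o timeout" rather than a refusal, which typically points at service-routing rules (kube-proxy/CNI) not being programmed yet rather than at the apiserver itself. A hedged sketch for telling the two apart from inside the pod network (the VIP is taken from the log; the 3s timeout is arbitrary):
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	// Dial the kubernetes Service VIP the way CoreDNS's client would reach it.
	// A timeout suggests packets are being dropped (e.g. service routing not
	// yet in place); "connection refused" would implicate the backend itself.
	func main() {
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		conn.Close()
		fmt.Println("VIP reachable; the CoreDNS timeouts were likely transient during startup")
	}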
	
	
	==> describe nodes <==
	Name:               embed-certs-973953
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-973953
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=embed-certs-973953
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_45_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:45:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-973953
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:46:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:46:41 +0000   Sat, 13 Dec 2025 13:45:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:46:41 +0000   Sat, 13 Dec 2025 13:45:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:46:41 +0000   Sat, 13 Dec 2025 13:45:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 13:46:41 +0000   Sat, 13 Dec 2025 13:45:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-973953
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                03ac64dc-35d6-4a73-b891-f77762e89392
	  Boot ID:                    3a031c38-2de5-4abf-9191-ca3cf8c37af1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-bl59n                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-embed-certs-973953                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-bw5d4                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-embed-certs-973953             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-embed-certs-973953    200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-jqcpv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-embed-certs-973953             100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-rdwkh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9zb5p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  115s (x8 over 115s)  kubelet          Node embed-certs-973953 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x8 over 115s)  kubelet          Node embed-certs-973953 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x8 over 115s)  kubelet          Node embed-certs-973953 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     110s                 kubelet          Node embed-certs-973953 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  110s                 kubelet          Node embed-certs-973953 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s                 kubelet          Node embed-certs-973953 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 110s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s                 node-controller  Node embed-certs-973953 event: Registered Node embed-certs-973953 in Controller
	  Normal  NodeReady                94s                  kubelet          Node embed-certs-973953 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)    kubelet          Node embed-certs-973953 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet          Node embed-certs-973953 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)    kubelet          Node embed-certs-973953 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                  node-controller  Node embed-certs-973953 event: Registered Node embed-certs-973953 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 d4 5a 35 c7 c3 08 06
	[  +0.021086] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +19.681588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 0c 97 18 9b e3 08 06
	[  +0.000314] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 04 61 d2 c8 ed 08 06
	[Dec13 13:44] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[  +7.252347] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 ce fd 58 59 0f 08 06
	[  +0.000117] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	[  +1.567410] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 59 b8 80 29 4a 08 06
	[  +0.000370] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +13.814205] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 cb 6b 87 5d af 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[Dec13 13:45] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8e 49 cc d7 b3 9c 08 06
	[  +0.000851] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	
	
	==> etcd [63a2ba4a5a1d996ff60a23b991b5a0cfa5dc9703b1f26e1efb01ad5545a6e669] <==
	{"level":"warn","ts":"2025-12-13T13:46:10.343800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.350165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.358002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.367581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.375429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.383173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.390622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.397313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.405340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.412667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.419656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.426657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.434219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.441923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.448926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.455263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.462215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.468764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.476712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.483454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.497764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.504381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.511334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:10.561682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41340","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T13:46:42.879604Z","caller":"traceutil/trace.go:172","msg":"trace[689963425] transaction","detail":"{read_only:false; response_revision:658; number_of_response:1; }","duration":"197.848307ms","start":"2025-12-13T13:46:42.681736Z","end":"2025-12-13T13:46:42.879584Z","steps":["trace[689963425] 'process raft request'  (duration: 196.921937ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:47:03 up  2:29,  0 user,  load average: 5.49, 4.32, 2.75
	Linux embed-certs-973953 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6c555c3d5d969e912b7a13fc6ea032d9b5037a541f10e177ed9f435d13f5bf08] <==
	I1213 13:46:12.129631       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 13:46:12.129891       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1213 13:46:12.130082       1 main.go:148] setting mtu 1500 for CNI 
	I1213 13:46:12.130106       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 13:46:12.130130       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T13:46:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 13:46:12.331876       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 13:46:12.331913       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 13:46:12.331930       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 13:46:12.332229       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 13:46:12.698401       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 13:46:12.698443       1 metrics.go:72] Registering metrics
	I1213 13:46:12.698522       1 controller.go:711] "Syncing nftables rules"
	I1213 13:46:22.331595       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 13:46:22.331689       1 main.go:301] handling current node
	I1213 13:46:32.338894       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 13:46:32.338920       1 main.go:301] handling current node
	I1213 13:46:42.331904       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 13:46:42.331936       1 main.go:301] handling current node
	I1213 13:46:52.334248       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 13:46:52.334320       1 main.go:301] handling current node
	I1213 13:47:02.340860       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 13:47:02.340912       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ca59722508ee8428d337934b1ea258c96ebcf5e6b597926df8e7c55eb6a97674] <==
	I1213 13:46:11.029940       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 13:46:11.029947       1 cache.go:39] Caches are synced for autoregister controller
	I1213 13:46:11.031011       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 13:46:11.031278       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 13:46:11.031476       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 13:46:11.031911       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 13:46:11.031989       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1213 13:46:11.032069       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 13:46:11.032137       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1213 13:46:11.031522       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 13:46:11.039567       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 13:46:11.048481       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 13:46:11.058386       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:46:11.324375       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 13:46:11.352065       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 13:46:11.370069       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 13:46:11.376117       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 13:46:11.382753       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 13:46:11.416104       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.37.235"}
	I1213 13:46:11.424989       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.230.164"}
	I1213 13:46:11.934095       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 13:46:14.536883       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:46:14.788601       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 13:46:14.788686       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 13:46:14.938334       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [628ec34c6d25dfe03110c51ea75cc04af49fd848dda5cc30d4f2618ba82a847e] <==
	I1213 13:46:14.368041       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 13:46:14.369225       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 13:46:14.371567       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1213 13:46:14.384062       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 13:46:14.384089       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 13:46:14.384114       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 13:46:14.384123       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 13:46:14.384125       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 13:46:14.384147       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 13:46:14.384287       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 13:46:14.384362       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1213 13:46:14.384458       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 13:46:14.384387       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 13:46:14.384467       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 13:46:14.384586       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1213 13:46:14.384750       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1213 13:46:14.384912       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-973953"
	I1213 13:46:14.384965       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1213 13:46:14.387750       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1213 13:46:14.387821       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1213 13:46:14.388154       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1213 13:46:14.389027       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1213 13:46:14.389075       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 13:46:14.391931       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1213 13:46:14.411221       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [a3bd12ac5959fa76ebe71bcd6e4bce6459412f36c9ca3212eaeb9f821e6a2c7e] <==
	I1213 13:46:11.956577       1 server_linux.go:53] "Using iptables proxy"
	I1213 13:46:12.029063       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 13:46:12.129920       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 13:46:12.129960       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1213 13:46:12.130102       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:46:12.151038       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:46:12.151099       1 server_linux.go:132] "Using iptables Proxier"
	I1213 13:46:12.156148       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:46:12.156514       1 server.go:527] "Version info" version="v1.34.2"
	I1213 13:46:12.156546       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:46:12.157822       1 config.go:200] "Starting service config controller"
	I1213 13:46:12.157845       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:46:12.157878       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:46:12.157885       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:46:12.157877       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:46:12.157900       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:46:12.158069       1 config.go:309] "Starting node config controller"
	I1213 13:46:12.158085       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:46:12.158093       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:46:12.258030       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 13:46:12.258095       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 13:46:12.258444       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [447b95afd76fcddb599b0f25dc7d2ae95263bb9a7ac29ae570889adee6a816b5] <==
	I1213 13:46:09.765852       1 serving.go:386] Generated self-signed cert in-memory
	W1213 13:46:10.963158       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 13:46:10.963207       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 13:46:10.963220       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 13:46:10.963230       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 13:46:10.989882       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 13:46:10.989914       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:46:10.992936       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 13:46:10.992986       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 13:46:10.992997       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 13:46:10.993069       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 13:46:11.094179       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 13:46:14 embed-certs-973953 kubelet[730]: I1213 13:46:14.975556     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4nbd\" (UniqueName: \"kubernetes.io/projected/d2365a41-1a6e-44b7-9890-47de6820efdf-kube-api-access-w4nbd\") pod \"dashboard-metrics-scraper-6ffb444bf9-rdwkh\" (UID: \"d2365a41-1a6e-44b7-9890-47de6820efdf\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh"
	Dec 13 13:46:14 embed-certs-973953 kubelet[730]: I1213 13:46:14.975631     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4c8daaac-7546-4f7d-a09c-a667c2a384b7-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-9zb5p\" (UID: \"4c8daaac-7546-4f7d-a09c-a667c2a384b7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9zb5p"
	Dec 13 13:46:14 embed-certs-973953 kubelet[730]: I1213 13:46:14.975661     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2cm8\" (UniqueName: \"kubernetes.io/projected/4c8daaac-7546-4f7d-a09c-a667c2a384b7-kube-api-access-m2cm8\") pod \"kubernetes-dashboard-855c9754f9-9zb5p\" (UID: \"4c8daaac-7546-4f7d-a09c-a667c2a384b7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9zb5p"
	Dec 13 13:46:14 embed-certs-973953 kubelet[730]: I1213 13:46:14.975687     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d2365a41-1a6e-44b7-9890-47de6820efdf-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-rdwkh\" (UID: \"d2365a41-1a6e-44b7-9890-47de6820efdf\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh"
	Dec 13 13:46:19 embed-certs-973953 kubelet[730]: I1213 13:46:19.825434     730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9zb5p" podStartSLOduration=2.594730658 podStartE2EDuration="5.825412413s" podCreationTimestamp="2025-12-13 13:46:14 +0000 UTC" firstStartedPulling="2025-12-13 13:46:15.18513591 +0000 UTC m=+6.723315431" lastFinishedPulling="2025-12-13 13:46:18.415817653 +0000 UTC m=+9.953997186" observedRunningTime="2025-12-13 13:46:18.620738391 +0000 UTC m=+10.158917932" watchObservedRunningTime="2025-12-13 13:46:19.825412413 +0000 UTC m=+11.363591955"
	Dec 13 13:46:21 embed-certs-973953 kubelet[730]: I1213 13:46:21.261167     730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh" podStartSLOduration=1.89337065 podStartE2EDuration="7.261145577s" podCreationTimestamp="2025-12-13 13:46:14 +0000 UTC" firstStartedPulling="2025-12-13 13:46:15.185312005 +0000 UTC m=+6.723491538" lastFinishedPulling="2025-12-13 13:46:20.553086945 +0000 UTC m=+12.091266465" observedRunningTime="2025-12-13 13:46:20.619017278 +0000 UTC m=+12.157196818" watchObservedRunningTime="2025-12-13 13:46:21.261145577 +0000 UTC m=+12.799325119"
	Dec 13 13:46:21 embed-certs-973953 kubelet[730]: I1213 13:46:21.613747     730 scope.go:117] "RemoveContainer" containerID="1b82dfc5a703e53976a3918ab50dc1d000d9437ab7b427384f4df9aab69e1690"
	Dec 13 13:46:22 embed-certs-973953 kubelet[730]: I1213 13:46:22.618018     730 scope.go:117] "RemoveContainer" containerID="1b82dfc5a703e53976a3918ab50dc1d000d9437ab7b427384f4df9aab69e1690"
	Dec 13 13:46:22 embed-certs-973953 kubelet[730]: I1213 13:46:22.618182     730 scope.go:117] "RemoveContainer" containerID="6d493ab7a549d2081851a1702e3fb9cb2ec842dba70edd2880a8abaa9d0c2fff"
	Dec 13 13:46:22 embed-certs-973953 kubelet[730]: E1213 13:46:22.618395     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rdwkh_kubernetes-dashboard(d2365a41-1a6e-44b7-9890-47de6820efdf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh" podUID="d2365a41-1a6e-44b7-9890-47de6820efdf"
	Dec 13 13:46:23 embed-certs-973953 kubelet[730]: I1213 13:46:23.622360     730 scope.go:117] "RemoveContainer" containerID="6d493ab7a549d2081851a1702e3fb9cb2ec842dba70edd2880a8abaa9d0c2fff"
	Dec 13 13:46:23 embed-certs-973953 kubelet[730]: E1213 13:46:23.622599     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rdwkh_kubernetes-dashboard(d2365a41-1a6e-44b7-9890-47de6820efdf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh" podUID="d2365a41-1a6e-44b7-9890-47de6820efdf"
	Dec 13 13:46:28 embed-certs-973953 kubelet[730]: I1213 13:46:28.626956     730 scope.go:117] "RemoveContainer" containerID="6d493ab7a549d2081851a1702e3fb9cb2ec842dba70edd2880a8abaa9d0c2fff"
	Dec 13 13:46:28 embed-certs-973953 kubelet[730]: E1213 13:46:28.627129     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rdwkh_kubernetes-dashboard(d2365a41-1a6e-44b7-9890-47de6820efdf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh" podUID="d2365a41-1a6e-44b7-9890-47de6820efdf"
	Dec 13 13:46:40 embed-certs-973953 kubelet[730]: I1213 13:46:40.550907     730 scope.go:117] "RemoveContainer" containerID="6d493ab7a549d2081851a1702e3fb9cb2ec842dba70edd2880a8abaa9d0c2fff"
	Dec 13 13:46:40 embed-certs-973953 kubelet[730]: I1213 13:46:40.665634     730 scope.go:117] "RemoveContainer" containerID="6d493ab7a549d2081851a1702e3fb9cb2ec842dba70edd2880a8abaa9d0c2fff"
	Dec 13 13:46:40 embed-certs-973953 kubelet[730]: I1213 13:46:40.665929     730 scope.go:117] "RemoveContainer" containerID="88fa874dcae8ecbde6c678ae8ef9b5c71b4742998a3c98d303aeed286a42e98c"
	Dec 13 13:46:40 embed-certs-973953 kubelet[730]: E1213 13:46:40.666175     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rdwkh_kubernetes-dashboard(d2365a41-1a6e-44b7-9890-47de6820efdf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh" podUID="d2365a41-1a6e-44b7-9890-47de6820efdf"
	Dec 13 13:46:42 embed-certs-973953 kubelet[730]: I1213 13:46:42.674629     730 scope.go:117] "RemoveContainer" containerID="25179d237bb92b28ed06c458b55b40813c605ade462e0315ffbf3dd6a5233072"
	Dec 13 13:46:48 embed-certs-973953 kubelet[730]: I1213 13:46:48.626876     730 scope.go:117] "RemoveContainer" containerID="88fa874dcae8ecbde6c678ae8ef9b5c71b4742998a3c98d303aeed286a42e98c"
	Dec 13 13:46:48 embed-certs-973953 kubelet[730]: E1213 13:46:48.627109     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rdwkh_kubernetes-dashboard(d2365a41-1a6e-44b7-9890-47de6820efdf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rdwkh" podUID="d2365a41-1a6e-44b7-9890-47de6820efdf"
	Dec 13 13:46:58 embed-certs-973953 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 13:46:58 embed-certs-973953 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 13:46:58 embed-certs-973953 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 13:46:58 embed-certs-973953 systemd[1]: kubelet.service: Consumed 1.626s CPU time.
	
	
	==> kubernetes-dashboard [829175c211a730469b696ac526ac2cf801bcf3f3786e55f7b59979ffe20b709e] <==
	2025/12/13 13:46:18 Starting overwatch
	2025/12/13 13:46:18 Using namespace: kubernetes-dashboard
	2025/12/13 13:46:18 Using in-cluster config to connect to apiserver
	2025/12/13 13:46:18 Using secret token for csrf signing
	2025/12/13 13:46:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 13:46:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 13:46:18 Successful initial request to the apiserver, version: v1.34.2
	2025/12/13 13:46:18 Generating JWE encryption key
	2025/12/13 13:46:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 13:46:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 13:46:18 Initializing JWE encryption key from synchronized object
	2025/12/13 13:46:18 Creating in-cluster Sidecar client
	2025/12/13 13:46:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 13:46:18 Serving insecurely on HTTP port: 9090
	2025/12/13 13:46:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [25179d237bb92b28ed06c458b55b40813c605ade462e0315ffbf3dd6a5233072] <==
	I1213 13:46:11.922972       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 13:46:41.926842       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [efd7617437d9c5becbcfe2a0765d7577e574de74c964f48ef5f0c61f98e15c5d] <==
	I1213 13:46:43.792130       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 13:46:43.799522       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 13:46:43.799632       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 13:46:43.803424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:47.258751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:51.519508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:55.117725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:46:58.171671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:47:01.194570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:47:01.198970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 13:47:01.199154       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 13:47:01.199267       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5f4fe34c-f4c1-4423-bf81-d96ad4a8dd1c", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-973953_8954321e-a4e4-4205-a6ba-883a28ddd10f became leader
	I1213 13:47:01.199315       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-973953_8954321e-a4e4-4205-a6ba-883a28ddd10f!
	W1213 13:47:01.201211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:47:01.204467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 13:47:01.299590       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-973953_8954321e-a4e4-4205-a6ba-883a28ddd10f!
	W1213 13:47:03.215807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:47:03.226892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-973953 -n embed-certs-973953
E1213 13:47:03.651208  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/kindnet-884214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:47:03.732743  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/kindnet-884214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:47:03.895030  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/kindnet-884214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-973953 -n embed-certs-973953: exit status 2 (342.464085ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-973953 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-362964 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-362964 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (253.581233ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:47:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-362964 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
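For reference, the MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state check, which runs the runc invocation quoted in the stderr output on the node before enabling the addon. A minimal way to re-run the same check by hand (a sketch, assuming the newest-cni-362964 profile is still up; the runc command is copied verbatim from the error message):

	out/minikube-linux-amd64 -p newest-cni-362964 ssh -- sudo runc list -f json

On this crio-based node the same "open /run/runc: no such file or directory" error would be expected, matching the stderr above.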
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-362964
helpers_test.go:244: (dbg) docker inspect newest-cni-362964:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a8feb9db9236a02e52470eeb356ca97717847d767470ea656413e040d80f3a41",
	        "Created": "2025-12-13T13:46:43.902071196Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 735448,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:46:43.935129359Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/a8feb9db9236a02e52470eeb356ca97717847d767470ea656413e040d80f3a41/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a8feb9db9236a02e52470eeb356ca97717847d767470ea656413e040d80f3a41/hostname",
	        "HostsPath": "/var/lib/docker/containers/a8feb9db9236a02e52470eeb356ca97717847d767470ea656413e040d80f3a41/hosts",
	        "LogPath": "/var/lib/docker/containers/a8feb9db9236a02e52470eeb356ca97717847d767470ea656413e040d80f3a41/a8feb9db9236a02e52470eeb356ca97717847d767470ea656413e040d80f3a41-json.log",
	        "Name": "/newest-cni-362964",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-362964:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-362964",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a8feb9db9236a02e52470eeb356ca97717847d767470ea656413e040d80f3a41",
	                "LowerDir": "/var/lib/docker/overlay2/591d532192eba7f9513e2e7a3f154ba6c3bec034fce6fcc25e10cc29cfa2afeb-init/diff:/var/lib/docker/overlay2/2ab30f867418f233812f5ff754587aaeab7569a5579dc6a5c99873a35cf81eb6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/591d532192eba7f9513e2e7a3f154ba6c3bec034fce6fcc25e10cc29cfa2afeb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/591d532192eba7f9513e2e7a3f154ba6c3bec034fce6fcc25e10cc29cfa2afeb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/591d532192eba7f9513e2e7a3f154ba6c3bec034fce6fcc25e10cc29cfa2afeb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-362964",
	                "Source": "/var/lib/docker/volumes/newest-cni-362964/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-362964",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-362964",
	                "name.minikube.sigs.k8s.io": "newest-cni-362964",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8c4df336993082d8cc75233e7e1d47d410ebd928872eeb079459777e978d02f0",
	            "SandboxKey": "/var/run/docker/netns/8c4df3369930",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33515"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33516"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33519"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33517"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33518"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-362964": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a5cddf8c31ff9d3f9d9f694626ad7e5d879d33f2650fe55e248b8c0b8c028028",
	                    "EndpointID": "136b4c63bf03baae7fa2194f0985037d6185c314750b3d254ffbcd2b6d04603e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "66:2b:c4:19:67:6a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-362964",
	                        "a8feb9db9236"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-362964 -n newest-cni-362964
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-362964 logs -n 25
E1213 13:47:05.890994  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/auto-884214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:47:06.140705  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/kindnet-884214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-362964 logs -n 25: (1.010785862s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p old-k8s-version-417583 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ stop    │ -p no-preload-992258 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ addons  │ enable metrics-server -p embed-certs-973953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │                     │
	│ stop    │ -p embed-certs-973953 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ addons  │ enable dashboard -p no-preload-992258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:45 UTC │
	│ start   │ -p no-preload-992258 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:45 UTC │ 13 Dec 25 13:46 UTC │
	│ addons  │ enable dashboard -p embed-certs-973953 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p embed-certs-973953 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-038239 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-038239 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ image   │ old-k8s-version-417583 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p old-k8s-version-417583 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-038239 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p default-k8s-diff-port-038239 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ delete  │ -p old-k8s-version-417583                                                                                                                                                                                                                            │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ delete  │ -p old-k8s-version-417583                                                                                                                                                                                                                            │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p newest-cni-362964 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:47 UTC │
	│ image   │ no-preload-992258 image list --format=json                                                                                                                                                                                                           │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p no-preload-992258 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ delete  │ -p no-preload-992258                                                                                                                                                                                                                                 │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ image   │ embed-certs-973953 image list --format=json                                                                                                                                                                                                          │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p embed-certs-973953 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ delete  │ -p no-preload-992258                                                                                                                                                                                                                                 │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ delete  │ -p embed-certs-973953                                                                                                                                                                                                                                │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-362964 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:46:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:46:38.807259  734452 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:46:38.807356  734452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:46:38.807364  734452 out.go:374] Setting ErrFile to fd 2...
	I1213 13:46:38.807368  734452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:46:38.807581  734452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:46:38.808124  734452 out.go:368] Setting JSON to false
	I1213 13:46:38.809505  734452 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8947,"bootTime":1765624652,"procs":408,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:46:38.809572  734452 start.go:143] virtualization: kvm guest
	I1213 13:46:38.811798  734452 out.go:179] * [newest-cni-362964] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:46:38.813823  734452 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:46:38.813876  734452 notify.go:221] Checking for updates...
	I1213 13:46:38.816262  734452 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:46:38.817585  734452 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:46:38.818693  734452 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:46:38.820057  734452 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:46:38.821335  734452 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:46:38.823198  734452 config.go:182] Loaded profile config "default-k8s-diff-port-038239": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:46:38.823338  734452 config.go:182] Loaded profile config "embed-certs-973953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:46:38.823469  734452 config.go:182] Loaded profile config "no-preload-992258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:46:38.823581  734452 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:46:38.861614  734452 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:46:38.861761  734452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:46:38.931148  734452 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-13 13:46:38.919230241 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:46:38.931318  734452 docker.go:319] overlay module found
	I1213 13:46:38.933289  734452 out.go:179] * Using the docker driver based on user configuration
	I1213 13:46:38.934577  734452 start.go:309] selected driver: docker
	I1213 13:46:38.934599  734452 start.go:927] validating driver "docker" against <nil>
	I1213 13:46:38.934616  734452 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:46:38.935491  734452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:46:39.004706  734452 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-13 13:46:38.992987781 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:46:39.004928  734452 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 13:46:39.004966  734452 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 13:46:39.005271  734452 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 13:46:39.007551  734452 out.go:179] * Using Docker driver with root privileges
	I1213 13:46:39.008611  734452 cni.go:84] Creating CNI manager for ""
	I1213 13:46:39.008719  734452 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:46:39.008737  734452 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 13:46:39.008854  734452 start.go:353] cluster config:
	{Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:46:39.010974  734452 out.go:179] * Starting "newest-cni-362964" primary control-plane node in "newest-cni-362964" cluster
	I1213 13:46:39.012247  734452 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 13:46:39.013645  734452 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 13:46:39.016856  734452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:46:39.016895  734452 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1213 13:46:39.016914  734452 cache.go:65] Caching tarball of preloaded images
	I1213 13:46:39.016962  734452 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 13:46:39.017009  734452 preload.go:238] Found /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 13:46:39.017022  734452 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 13:46:39.017144  734452 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/config.json ...
	I1213 13:46:39.017168  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/config.json: {Name:mk03f8124fe1745099f3d3cb3fe7fe5ae5e6b929 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:39.044079  734452 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 13:46:39.044103  734452 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 13:46:39.044123  734452 cache.go:243] Successfully downloaded all kic artifacts
	I1213 13:46:39.044162  734452 start.go:360] acquireMachinesLock for newest-cni-362964: {Name:mk61572d281c54a6e0670409b0733cc12a8d00e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:46:39.044269  734452 start.go:364] duration metric: took 87.606µs to acquireMachinesLock for "newest-cni-362964"
	I1213 13:46:39.044501  734452 start.go:93] Provisioning new machine with config: &{Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:46:39.044595  734452 start.go:125] createHost starting for "" (driver="docker")
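The preload lines above (preload.go) check for a locally cached tarball of pre-pulled images before attempting any download. A minimal sketch of that kind of cache-existence check follows; the cache directory layout and helper name here are assumptions for illustration, not minikube's actual code.

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// hasPreload reports whether a preloaded-images tarball for the given
	// Kubernetes version and runtime already exists under cacheDir. The file
	// name scheme mirrors the one visible in the log; the directory layout is
	// only an assumption.
	func hasPreload(cacheDir, k8sVersion, runtime string) (string, bool) {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
		path := filepath.Join(cacheDir, "preloaded-tarball", name)
		if _, err := os.Stat(path); err != nil {
			return path, false // not cached; a real run would download it
		}
		return path, true // cached; skip the download
	}

	func main() {
		if p, ok := hasPreload(os.ExpandEnv("$HOME/.minikube/cache"), "v1.35.0-beta.0", "cri-o"); ok {
			fmt.Println("found local preload:", p)
		}
	}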
	I1213 13:46:37.593032  723278 pod_ready.go:94] pod "coredns-7d764666f9-qfkgp" is "Ready"
	I1213 13:46:37.593060  723278 pod_ready.go:86] duration metric: took 39.506081408s for pod "coredns-7d764666f9-qfkgp" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.595721  723278 pod_ready.go:83] waiting for pod "etcd-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.600005  723278 pod_ready.go:94] pod "etcd-no-preload-992258" is "Ready"
	I1213 13:46:37.600027  723278 pod_ready.go:86] duration metric: took 4.283645ms for pod "etcd-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.602349  723278 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.606335  723278 pod_ready.go:94] pod "kube-apiserver-no-preload-992258" is "Ready"
	I1213 13:46:37.606353  723278 pod_ready.go:86] duration metric: took 3.985408ms for pod "kube-apiserver-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.608278  723278 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.793439  723278 pod_ready.go:94] pod "kube-controller-manager-no-preload-992258" is "Ready"
	I1213 13:46:37.793538  723278 pod_ready.go:86] duration metric: took 185.240657ms for pod "kube-controller-manager-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:37.993814  723278 pod_ready.go:83] waiting for pod "kube-proxy-sjrzk" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:38.391287  723278 pod_ready.go:94] pod "kube-proxy-sjrzk" is "Ready"
	I1213 13:46:38.391316  723278 pod_ready.go:86] duration metric: took 397.467202ms for pod "kube-proxy-sjrzk" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:38.592664  723278 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:38.991819  723278 pod_ready.go:94] pod "kube-scheduler-no-preload-992258" is "Ready"
	I1213 13:46:38.991855  723278 pod_ready.go:86] duration metric: took 399.165979ms for pod "kube-scheduler-no-preload-992258" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:38.991870  723278 pod_ready.go:40] duration metric: took 40.907684385s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:46:39.055074  723278 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1213 13:46:39.056693  723278 out.go:179] * Done! kubectl is now configured to use "no-preload-992258" cluster and "default" namespace by default
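The pod_ready.go lines above poll each control-plane pod until its Ready condition turns true (or the wait times out). A hedged sketch of that check using client-go is below; the clientset construction, polling interval, and hard-coded pod name are illustrative assumptions, not minikube's actual wiring.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady returns true once the named pod reports the Ready condition.
	func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, _ := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		cs, _ := kubernetes.NewForConfig(cfg)
		ctx := context.Background()
		for { // poll until Ready, as the "waiting for pod ..." lines above do
			ok, err := isPodReady(ctx, cs, "kube-system", "etcd-no-preload-992258")
			if err == nil && ok {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}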
	I1213 13:46:37.744577  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 13:46:37.744596  730912 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 13:46:37.744659  730912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-038239
	I1213 13:46:37.769735  730912 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 13:46:37.769842  730912 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 13:46:37.769924  730912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa Username:docker}
	I1213 13:46:37.769942  730912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-038239
	I1213 13:46:37.773997  730912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa Username:docker}
	I1213 13:46:37.806607  730912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa Username:docker}
	I1213 13:46:37.885020  730912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:46:37.892323  730912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:46:37.901908  730912 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-038239" to be "Ready" ...
	I1213 13:46:37.908074  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 13:46:37.908095  730912 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 13:46:37.924625  730912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 13:46:37.926038  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 13:46:37.926060  730912 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 13:46:37.942015  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 13:46:37.942038  730912 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 13:46:37.961315  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 13:46:37.961339  730912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 13:46:37.979600  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 13:46:37.979629  730912 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 13:46:38.003635  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 13:46:38.003660  730912 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 13:46:38.019334  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 13:46:38.019359  730912 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 13:46:38.036465  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 13:46:38.036507  730912 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 13:46:38.053804  730912 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 13:46:38.053835  730912 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 13:46:38.071650  730912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 13:46:39.597072  730912 node_ready.go:49] node "default-k8s-diff-port-038239" is "Ready"
	I1213 13:46:39.597127  730912 node_ready.go:38] duration metric: took 1.695171527s for node "default-k8s-diff-port-038239" to be "Ready" ...
	I1213 13:46:39.597146  730912 api_server.go:52] waiting for apiserver process to appear ...
	I1213 13:46:39.597331  730912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:46:40.220696  730912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.328338683s)
	I1213 13:46:40.220801  730912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.296116857s)
	I1213 13:46:40.220919  730912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.149240842s)
	I1213 13:46:40.221000  730912 api_server.go:72] duration metric: took 2.51244991s to wait for apiserver process to appear ...
	I1213 13:46:40.221052  730912 api_server.go:88] waiting for apiserver healthz status ...
	I1213 13:46:40.221075  730912 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1213 13:46:40.223057  730912 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-038239 addons enable metrics-server
	
	I1213 13:46:40.226524  730912 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:46:40.226548  730912 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:46:40.228246  730912 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1213 13:46:40.229402  730912 addons.go:530] duration metric: took 2.520798966s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	W1213 13:46:37.552331  726383 pod_ready.go:104] pod "coredns-66bc5c9577-bl59n" is not "Ready", error: <nil>
	W1213 13:46:39.558845  726383 pod_ready.go:104] pod "coredns-66bc5c9577-bl59n" is not "Ready", error: <nil>
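The healthz output above is a transient state just after the apiserver restarts: the rbac/bootstrap-roles post-start hook has not completed yet, so /healthz returns 500, and the check is retried until it returns 200 (which it does a second later, at 13:46:41 below). A minimal sketch of that retry-until-healthy probe, with certificate verification skipped only to keep the example self-contained:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthz polls the apiserver /healthz endpoint until it returns 200
	// or the deadline passes. A real client would trust the cluster CA instead
	// of skipping verification.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz check passed
				}
				// A 500 with failed post-start hooks is expected briefly; retry.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s never became healthy", url)
	}

	func main() {
		if err := waitHealthz("https://192.168.94.2:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}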
	I1213 13:46:39.050825  734452 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 13:46:39.051127  734452 start.go:159] libmachine.API.Create for "newest-cni-362964" (driver="docker")
	I1213 13:46:39.051170  734452 client.go:173] LocalClient.Create starting
	I1213 13:46:39.051291  734452 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem
	I1213 13:46:39.051338  734452 main.go:143] libmachine: Decoding PEM data...
	I1213 13:46:39.051367  734452 main.go:143] libmachine: Parsing certificate...
	I1213 13:46:39.051431  734452 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem
	I1213 13:46:39.051459  734452 main.go:143] libmachine: Decoding PEM data...
	I1213 13:46:39.051478  734452 main.go:143] libmachine: Parsing certificate...
	I1213 13:46:39.051941  734452 cli_runner.go:164] Run: docker network inspect newest-cni-362964 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 13:46:39.074137  734452 cli_runner.go:211] docker network inspect newest-cni-362964 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 13:46:39.074224  734452 network_create.go:284] running [docker network inspect newest-cni-362964] to gather additional debugging logs...
	I1213 13:46:39.074248  734452 cli_runner.go:164] Run: docker network inspect newest-cni-362964
	W1213 13:46:39.102273  734452 cli_runner.go:211] docker network inspect newest-cni-362964 returned with exit code 1
	I1213 13:46:39.102343  734452 network_create.go:287] error running [docker network inspect newest-cni-362964]: docker network inspect newest-cni-362964: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-362964 not found
	I1213 13:46:39.102377  734452 network_create.go:289] output of [docker network inspect newest-cni-362964]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-362964 not found
	
	** /stderr **
	I1213 13:46:39.102549  734452 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:46:39.122483  734452 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-90c6185d3a1c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:d7:d8:45:ed:62} reservation:<nil>}
	I1213 13:46:39.123444  734452 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b99c511b2851 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:f5:60:cf:cf:e0} reservation:<nil>}
	I1213 13:46:39.124137  734452 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8173e81c4a82 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:76:c5:9d:b0:f9} reservation:<nil>}
	I1213 13:46:39.125173  734452 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed8a30}
	I1213 13:46:39.125201  734452 network_create.go:124] attempt to create docker network newest-cni-362964 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 13:46:39.125260  734452 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-362964 newest-cni-362964
	I1213 13:46:39.179901  734452 network_create.go:108] docker network newest-cni-362964 192.168.76.0/24 created
	I1213 13:46:39.179928  734452 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-362964" container
	I1213 13:46:39.179979  734452 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 13:46:39.213973  734452 cli_runner.go:164] Run: docker volume create newest-cni-362964 --label name.minikube.sigs.k8s.io=newest-cni-362964 --label created_by.minikube.sigs.k8s.io=true
	I1213 13:46:39.235544  734452 oci.go:103] Successfully created a docker volume newest-cni-362964
	I1213 13:46:39.235642  734452 cli_runner.go:164] Run: docker run --rm --name newest-cni-362964-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-362964 --entrypoint /usr/bin/test -v newest-cni-362964:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 13:46:39.751588  734452 oci.go:107] Successfully prepared a docker volume newest-cni-362964
	I1213 13:46:39.751676  734452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:46:39.751688  734452 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 13:46:39.751766  734452 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-362964:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
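Before the volume extraction above, network.go walked the 192.168.x.0/24 candidates, skipped the three subnets already claimed by existing bridges, and settled on 192.168.76.0/24. A rough sketch of that first-free-subnet scan follows; the step of 9 between third octets matches the gaps visible in the log (49, 58, 67, 76) but is otherwise an assumption.

	package main

	import "fmt"

	// firstFreeSubnet returns the first 192.168.x.0/24 candidate that is not
	// already in use, stepping through third octets the way the log does.
	func firstFreeSubnet(taken map[string]bool) (string, bool) {
		for octet := 49; octet <= 254; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[cidr] {
				return cidr, true
			}
		}
		return "", false
	}

	func main() {
		// Subnets reported as taken by `docker network inspect` in the log above.
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
		}
		if cidr, ok := firstFreeSubnet(taken); ok {
			fmt.Println("using free private subnet", cidr) // prints 192.168.76.0/24
		}
	}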
	I1213 13:46:40.721469  730912 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1213 13:46:40.727005  730912 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:46:40.727036  730912 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:46:41.221758  730912 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1213 13:46:41.227300  730912 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1213 13:46:41.228302  730912 api_server.go:141] control plane version: v1.34.2
	I1213 13:46:41.228325  730912 api_server.go:131] duration metric: took 1.007264269s to wait for apiserver health ...
	I1213 13:46:41.228334  730912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 13:46:41.231822  730912 system_pods.go:59] 8 kube-system pods found
	I1213 13:46:41.231857  730912 system_pods.go:61] "coredns-66bc5c9577-tzzmx" [980da903-c99d-4518-9ee3-7e5a96adec7e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:46:41.231869  730912 system_pods.go:61] "etcd-default-k8s-diff-port-038239" [4281e3fe-09b2-4f4b-b735-e81d8f92611d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 13:46:41.231876  730912 system_pods.go:61] "kindnet-c65rs" [70da74c6-b3f7-4c93-830f-cd2e08c1a82b] Running
	I1213 13:46:41.231882  730912 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-038239" [61e90c83-4a74-41da-af00-64ad96e831b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 13:46:41.231891  730912 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-038239" [327b2203-201b-4496-b88d-085894210077] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 13:46:41.231897  730912 system_pods.go:61] "kube-proxy-lzwfg" [706752fb-a589-4e6f-b710-228e3650dacd] Running
	I1213 13:46:41.231905  730912 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-038239" [ae96dbde-d4ad-4db9-a9d4-dd56f9954d93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 13:46:41.231912  730912 system_pods.go:61] "storage-provisioner" [ee84dbb0-2764-427e-aa74-2827e9ce9620] Running
	I1213 13:46:41.231923  730912 system_pods.go:74] duration metric: took 3.580887ms to wait for pod list to return data ...
	I1213 13:46:41.231936  730912 default_sa.go:34] waiting for default service account to be created ...
	I1213 13:46:41.234505  730912 default_sa.go:45] found service account: "default"
	I1213 13:46:41.234528  730912 default_sa.go:55] duration metric: took 2.585513ms for default service account to be created ...
	I1213 13:46:41.234537  730912 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 13:46:41.237182  730912 system_pods.go:86] 8 kube-system pods found
	I1213 13:46:41.237209  730912 system_pods.go:89] "coredns-66bc5c9577-tzzmx" [980da903-c99d-4518-9ee3-7e5a96adec7e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 13:46:41.237220  730912 system_pods.go:89] "etcd-default-k8s-diff-port-038239" [4281e3fe-09b2-4f4b-b735-e81d8f92611d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 13:46:41.237227  730912 system_pods.go:89] "kindnet-c65rs" [70da74c6-b3f7-4c93-830f-cd2e08c1a82b] Running
	I1213 13:46:41.237236  730912 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-038239" [61e90c83-4a74-41da-af00-64ad96e831b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 13:46:41.237245  730912 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-038239" [327b2203-201b-4496-b88d-085894210077] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 13:46:41.237253  730912 system_pods.go:89] "kube-proxy-lzwfg" [706752fb-a589-4e6f-b710-228e3650dacd] Running
	I1213 13:46:41.237261  730912 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-038239" [ae96dbde-d4ad-4db9-a9d4-dd56f9954d93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 13:46:41.237271  730912 system_pods.go:89] "storage-provisioner" [ee84dbb0-2764-427e-aa74-2827e9ce9620] Running
	I1213 13:46:41.237279  730912 system_pods.go:126] duration metric: took 2.735704ms to wait for k8s-apps to be running ...
	I1213 13:46:41.237288  730912 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 13:46:41.237331  730912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:46:41.250597  730912 system_svc.go:56] duration metric: took 13.296933ms WaitForService to wait for kubelet
	I1213 13:46:41.250630  730912 kubeadm.go:587] duration metric: took 3.542081461s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 13:46:41.250655  730912 node_conditions.go:102] verifying NodePressure condition ...
	I1213 13:46:41.254078  730912 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 13:46:41.254103  730912 node_conditions.go:123] node cpu capacity is 8
	I1213 13:46:41.254126  730912 node_conditions.go:105] duration metric: took 3.462529ms to run NodePressure ...
	I1213 13:46:41.254141  730912 start.go:242] waiting for startup goroutines ...
	I1213 13:46:41.254155  730912 start.go:247] waiting for cluster config update ...
	I1213 13:46:41.254174  730912 start.go:256] writing updated cluster config ...
	I1213 13:46:41.254482  730912 ssh_runner.go:195] Run: rm -f paused
	I1213 13:46:41.258509  730912 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:46:41.262286  730912 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tzzmx" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 13:46:43.315769  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	W1213 13:46:42.051398  726383 pod_ready.go:104] pod "coredns-66bc5c9577-bl59n" is not "Ready", error: <nil>
	I1213 13:46:44.558674  726383 pod_ready.go:94] pod "coredns-66bc5c9577-bl59n" is "Ready"
	I1213 13:46:44.558713  726383 pod_ready.go:86] duration metric: took 32.012951382s for pod "coredns-66bc5c9577-bl59n" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.561144  726383 pod_ready.go:83] waiting for pod "etcd-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.565899  726383 pod_ready.go:94] pod "etcd-embed-certs-973953" is "Ready"
	I1213 13:46:44.565923  726383 pod_ready.go:86] duration metric: took 4.7423ms for pod "etcd-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.568261  726383 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.572565  726383 pod_ready.go:94] pod "kube-apiserver-embed-certs-973953" is "Ready"
	I1213 13:46:44.572592  726383 pod_ready.go:86] duration metric: took 4.304087ms for pod "kube-apiserver-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.575031  726383 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.750453  726383 pod_ready.go:94] pod "kube-controller-manager-embed-certs-973953" is "Ready"
	I1213 13:46:44.750489  726383 pod_ready.go:86] duration metric: took 175.430643ms for pod "kube-controller-manager-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:44.951317  726383 pod_ready.go:83] waiting for pod "kube-proxy-jqcpv" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:45.350477  726383 pod_ready.go:94] pod "kube-proxy-jqcpv" is "Ready"
	I1213 13:46:45.350507  726383 pod_ready.go:86] duration metric: took 399.159038ms for pod "kube-proxy-jqcpv" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:45.550818  726383 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:45.950357  726383 pod_ready.go:94] pod "kube-scheduler-embed-certs-973953" is "Ready"
	I1213 13:46:45.950385  726383 pod_ready.go:86] duration metric: took 399.541821ms for pod "kube-scheduler-embed-certs-973953" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 13:46:45.950396  726383 pod_ready.go:40] duration metric: took 33.408030209s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 13:46:46.003877  726383 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 13:46:46.006266  726383 out.go:179] * Done! kubectl is now configured to use "embed-certs-973953" cluster and "default" namespace by default
	I1213 13:46:43.827925  734452 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-362964:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.0760512s)
	I1213 13:46:43.827966  734452 kic.go:203] duration metric: took 4.076273522s to extract preloaded images to volume ...
	W1213 13:46:43.828063  734452 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1213 13:46:43.828111  734452 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1213 13:46:43.828160  734452 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 13:46:43.885693  734452 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-362964 --name newest-cni-362964 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-362964 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-362964 --network newest-cni-362964 --ip 192.168.76.2 --volume newest-cni-362964:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 13:46:44.183753  734452 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Running}}
	I1213 13:46:44.203369  734452 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:46:44.223422  734452 cli_runner.go:164] Run: docker exec newest-cni-362964 stat /var/lib/dpkg/alternatives/iptables
	I1213 13:46:44.277034  734452 oci.go:144] the created container "newest-cni-362964" has a running status.
	I1213 13:46:44.277064  734452 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa...
	I1213 13:46:44.344914  734452 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 13:46:44.377198  734452 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:46:44.402053  734452 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 13:46:44.402083  734452 kic_runner.go:114] Args: [docker exec --privileged newest-cni-362964 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 13:46:44.478040  734452 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:46:44.506931  734452 machine.go:94] provisionDockerMachine start ...
	I1213 13:46:44.507418  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:44.537001  734452 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:44.537395  734452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1213 13:46:44.537427  734452 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:46:44.538118  734452 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48464->127.0.0.1:33515: read: connection reset by peer
	I1213 13:46:47.689037  734452 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-362964
	
	I1213 13:46:47.689072  734452 ubuntu.go:182] provisioning hostname "newest-cni-362964"
	I1213 13:46:47.689140  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:47.712543  734452 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:47.713000  734452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1213 13:46:47.713025  734452 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-362964 && echo "newest-cni-362964" | sudo tee /etc/hostname
	I1213 13:46:47.873217  734452 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-362964
	
	I1213 13:46:47.873318  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:47.896725  734452 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:47.897081  734452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1213 13:46:47.897130  734452 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-362964' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-362964/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-362964' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:46:48.044203  734452 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 13:46:48.044232  734452 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-390571/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-390571/.minikube}
	I1213 13:46:48.044289  734452 ubuntu.go:190] setting up certificates
	I1213 13:46:48.044304  734452 provision.go:84] configureAuth start
	I1213 13:46:48.044368  734452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:46:48.068662  734452 provision.go:143] copyHostCerts
	I1213 13:46:48.068728  734452 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem, removing ...
	I1213 13:46:48.068739  734452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem
	I1213 13:46:48.068879  734452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem (1123 bytes)
	I1213 13:46:48.069004  734452 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem, removing ...
	I1213 13:46:48.069048  734452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem
	I1213 13:46:48.069113  734452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem (1679 bytes)
	I1213 13:46:48.069294  734452 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem, removing ...
	I1213 13:46:48.069312  734452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem
	I1213 13:46:48.069355  734452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem (1078 bytes)
	I1213 13:46:48.069462  734452 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem org=jenkins.newest-cni-362964 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-362964]
	I1213 13:46:48.220174  734452 provision.go:177] copyRemoteCerts
	I1213 13:46:48.220240  734452 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:46:48.220284  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:48.242055  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:46:48.348835  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 13:46:48.372845  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:46:48.394838  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 13:46:48.416450  734452 provision.go:87] duration metric: took 372.119155ms to configureAuth
	I1213 13:46:48.416488  734452 ubuntu.go:206] setting minikube options for container-runtime
	I1213 13:46:48.416718  734452 config.go:182] Loaded profile config "newest-cni-362964": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:46:48.416935  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:48.438340  734452 main.go:143] libmachine: Using SSH client type: native
	I1213 13:46:48.438572  734452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33515 <nil> <nil>}
	I1213 13:46:48.438593  734452 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:46:48.772615  734452 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:46:48.772642  734452 machine.go:97] duration metric: took 4.265315999s to provisionDockerMachine
	I1213 13:46:48.772654  734452 client.go:176] duration metric: took 9.721476668s to LocalClient.Create
	I1213 13:46:48.772675  734452 start.go:167] duration metric: took 9.721549598s to libmachine.API.Create "newest-cni-362964"
	I1213 13:46:48.772685  734452 start.go:293] postStartSetup for "newest-cni-362964" (driver="docker")
	I1213 13:46:48.772700  734452 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:46:48.772766  734452 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:46:48.772846  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:48.796130  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	W1213 13:46:45.768717  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	W1213 13:46:48.269155  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	I1213 13:46:48.906093  734452 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:46:48.910767  734452 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 13:46:48.910823  734452 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 13:46:48.910839  734452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/addons for local assets ...
	I1213 13:46:48.910910  734452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/files for local assets ...
	I1213 13:46:48.911037  734452 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem -> 3941302.pem in /etc/ssl/certs
	I1213 13:46:48.911209  734452 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 13:46:48.921911  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:46:48.947921  734452 start.go:296] duration metric: took 175.219125ms for postStartSetup
	I1213 13:46:48.948314  734452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:46:48.972402  734452 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/config.json ...
	I1213 13:46:48.972688  734452 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:46:48.972732  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:48.995624  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:46:49.100377  734452 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 13:46:49.106414  734452 start.go:128] duration metric: took 10.061800408s to createHost
	I1213 13:46:49.106444  734452 start.go:83] releasing machines lock for "newest-cni-362964", held for 10.062163513s
	I1213 13:46:49.106521  734452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:46:49.131359  734452 ssh_runner.go:195] Run: cat /version.json
	I1213 13:46:49.131430  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:49.131434  734452 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:46:49.131534  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:46:49.155684  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:46:49.156118  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:46:49.345845  734452 ssh_runner.go:195] Run: systemctl --version
	I1213 13:46:49.354872  734452 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:46:49.402808  734452 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 13:46:49.408988  734452 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:46:49.409066  734452 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:46:49.440997  734452 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
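The find invocation above is logged with its shell quoting stripped by ssh_runner; re-quoted, the same operation is roughly the sketch below, which is how the bridge/podman CNI configs listed in this line end up with a .mk_disabled suffix so they cannot conflict with the CNI config minikube applies later.
	# Re-quoted sketch of the logged find/mv (GNU find syntax); not the literal command line.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;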
	I1213 13:46:49.441025  734452 start.go:496] detecting cgroup driver to use...
	I1213 13:46:49.441060  734452 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 13:46:49.441115  734452 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:46:49.462316  734452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:46:49.477713  734452 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:46:49.477795  734452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:46:49.501648  734452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:46:49.526524  734452 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:46:49.629504  734452 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:46:49.728940  734452 docker.go:234] disabling docker service ...
	I1213 13:46:49.729008  734452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:46:49.751594  734452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:46:49.766407  734452 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:46:49.855523  734452 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:46:49.940562  734452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:46:49.953965  734452 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:46:49.968209  734452 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 13:46:49.968288  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:49.979551  734452 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 13:46:49.979626  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:49.988154  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:49.997026  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:50.005337  734452 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:46:50.013019  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:50.021641  734452 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:50.035024  734452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:46:50.043264  734452 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:46:50.050409  734452 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:46:50.057213  734452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:46:50.144700  734452 ssh_runner.go:195] Run: sudo systemctl restart crio
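The sed edits above touch only a few keys in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, the unprivileged-port sysctl) plus /etc/crictl.yaml before crio is restarted. A quick way to spot-check the result inside the node, sketched here, would be:
	# Verification sketch (run inside the node); paths and keys are the ones edited above.
	grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	grep -A2 '^default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
	cat /etc/crictl.yaml    # should point crictl at unix:///var/run/crio/crio.sock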
	I1213 13:46:51.023735  734452 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:46:51.023835  734452 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:46:51.028520  734452 start.go:564] Will wait 60s for crictl version
	I1213 13:46:51.028585  734452 ssh_runner.go:195] Run: which crictl
	I1213 13:46:51.032526  734452 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 13:46:51.058397  734452 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 13:46:51.058490  734452 ssh_runner.go:195] Run: crio --version
	I1213 13:46:51.086747  734452 ssh_runner.go:195] Run: crio --version
	I1213 13:46:51.117725  734452 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 13:46:51.118756  734452 cli_runner.go:164] Run: docker network inspect newest-cni-362964 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:46:51.138994  734452 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 13:46:51.143167  734452 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
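Both /etc/hosts rewrites in this start (host.minikube.internal here, control-plane.minikube.internal a little further down) use the same drop-then-append pattern. Once both have run, the node's /etc/hosts should carry two minikube-managed entries, which a sketch like this would confirm:
	# Expected host entries after both rewrites (IPs are the ones logged in this run).
	grep -E 'host\.minikube\.internal|control-plane\.minikube\.internal' /etc/hosts
	# 192.168.76.1	host.minikube.internal
	# 192.168.76.2	control-plane.minikube.internal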
	I1213 13:46:51.155706  734452 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 13:46:51.156802  734452 kubeadm.go:884] updating cluster {Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:46:51.156953  734452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:46:51.157039  734452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:46:51.198200  734452 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:46:51.198221  734452 crio.go:433] Images already preloaded, skipping extraction
	I1213 13:46:51.198267  734452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:46:51.225683  734452 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:46:51.225709  734452 cache_images.go:86] Images are preloaded, skipping loading
	I1213 13:46:51.225719  734452 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 13:46:51.225843  734452 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-362964 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 13:46:51.225940  734452 ssh_runner.go:195] Run: crio config
	I1213 13:46:51.273702  734452 cni.go:84] Creating CNI manager for ""
	I1213 13:46:51.273722  734452 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:46:51.273741  734452 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 13:46:51.273768  734452 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-362964 NodeName:newest-cni-362964 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:46:51.273951  734452 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-362964"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 13:46:51.274024  734452 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 13:46:51.282302  734452 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:46:51.282376  734452 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:46:51.290422  734452 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 13:46:51.303253  734452 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 13:46:51.318075  734452 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
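The kubeadm config rendered above is uploaded here as /var/tmp/minikube/kubeadm.yaml.new and copied to kubeadm.yaml just before init. Assuming the bundled kubeadm supports --dry-run, the generated file could be sanity-checked by hand without touching cluster state, for example:
	# Hypothetical dry run of the uploaded config; the binary directory matches the PATH used for init below.
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run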
	I1213 13:46:51.331214  734452 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 13:46:51.334976  734452 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:46:51.345829  734452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:46:51.437080  734452 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:46:51.461201  734452 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964 for IP: 192.168.76.2
	I1213 13:46:51.461228  734452 certs.go:195] generating shared ca certs ...
	I1213 13:46:51.461258  734452 certs.go:227] acquiring lock for ca certs: {Name:mkb6963f3134ffd486c672ddb3a967e56122d5d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.461456  734452 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key
	I1213 13:46:51.461517  734452 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key
	I1213 13:46:51.461535  734452 certs.go:257] generating profile certs ...
	I1213 13:46:51.461611  734452 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.key
	I1213 13:46:51.461644  734452 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.crt with IP's: []
	I1213 13:46:51.675129  734452 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.crt ...
	I1213 13:46:51.675163  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.crt: {Name:mkfc2919111fa26d81b7191d3873ecc598936940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.675356  734452 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.key ...
	I1213 13:46:51.675368  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.key: {Name:mkcca4e2f19072f042ecc8cce95f891ff7bba521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.675455  734452 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb
	I1213 13:46:51.675473  734452 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt.a735fadb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 13:46:51.732537  734452 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt.a735fadb ...
	I1213 13:46:51.732571  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt.a735fadb: {Name:mka68b1fc7336251712aa83c57233f6aaa26b56e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.732752  734452 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb ...
	I1213 13:46:51.732766  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb: {Name:mk7b2188d2ac3de30be4a0ecf05771755b89586c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.732898  734452 certs.go:382] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt.a735fadb -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt
	I1213 13:46:51.733002  734452 certs.go:386] copying /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb -> /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key
	I1213 13:46:51.733072  734452 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key
	I1213 13:46:51.733091  734452 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt with IP's: []
	I1213 13:46:51.768844  734452 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt ...
	I1213 13:46:51.768876  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt: {Name:mk54ca537df717e699f15967f0763bc1a365ba7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.769051  734452 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key ...
	I1213 13:46:51.769066  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key: {Name:mkc6731d5f061dd55c086b1529645fdd7e056a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:46:51.769254  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem (1338 bytes)
	W1213 13:46:51.769294  734452 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130_empty.pem, impossibly tiny 0 bytes
	I1213 13:46:51.769306  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:46:51.769336  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:46:51.769363  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:46:51.769392  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem (1679 bytes)
	I1213 13:46:51.769438  734452 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:46:51.770096  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:46:51.789179  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 13:46:51.807957  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:46:51.829246  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 13:46:51.849816  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 13:46:51.867382  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 13:46:51.884431  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:46:51.901499  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 13:46:51.918590  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem --> /usr/share/ca-certificates/394130.pem (1338 bytes)
	I1213 13:46:51.938587  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /usr/share/ca-certificates/3941302.pem (1708 bytes)
	I1213 13:46:51.956885  734452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:46:51.976711  734452 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:46:51.990451  734452 ssh_runner.go:195] Run: openssl version
	I1213 13:46:51.996876  734452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/394130.pem
	I1213 13:46:52.004771  734452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/394130.pem /etc/ssl/certs/394130.pem
	I1213 13:46:52.013327  734452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/394130.pem
	I1213 13:46:52.017188  734452 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/394130.pem
	I1213 13:46:52.017246  734452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/394130.pem
	I1213 13:46:52.052182  734452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 13:46:52.060156  734452 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/394130.pem /etc/ssl/certs/51391683.0
	I1213 13:46:52.067555  734452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3941302.pem
	I1213 13:46:52.074980  734452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3941302.pem /etc/ssl/certs/3941302.pem
	I1213 13:46:52.083293  734452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3941302.pem
	I1213 13:46:52.087008  734452 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/3941302.pem
	I1213 13:46:52.087060  734452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3941302.pem
	I1213 13:46:52.121292  734452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 13:46:52.129202  734452 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3941302.pem /etc/ssl/certs/3ec20f2e.0
	I1213 13:46:52.136878  734452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:52.144894  734452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:46:52.152936  734452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:52.156906  734452 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:52.156974  734452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:46:52.192626  734452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 13:46:52.200484  734452 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
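The test/ln/openssl sequence above installs each CA certificate and then creates an OpenSSL subject-hash symlink for it (51391683.0, 3ec20f2e.0, b5213941.0), which is what lets TLS clients on the node locate the CAs by hash. Collapsed into one loop, an equivalent sketch is:
	# Equivalent of the per-certificate steps above: link the cert, then add a hash symlink.
	for cert in 394130.pem 3941302.pem minikubeCA.pem; do
	  sudo ln -fs "/usr/share/ca-certificates/$cert" "/etc/ssl/certs/$cert"
	  hash=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/$cert")
	  sudo ln -fs "/etc/ssl/certs/$cert" "/etc/ssl/certs/${hash}.0"
	done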
	I1213 13:46:52.207749  734452 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:46:52.211283  734452 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 13:46:52.211338  734452 kubeadm.go:401] StartCluster: {Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:46:52.211418  734452 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:46:52.211486  734452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:46:52.238989  734452 cri.go:89] found id: ""
	I1213 13:46:52.239071  734452 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:46:52.248678  734452 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 13:46:52.257209  734452 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 13:46:52.257267  734452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 13:46:52.265205  734452 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 13:46:52.265226  734452 kubeadm.go:158] found existing configuration files:
	
	I1213 13:46:52.265280  734452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 13:46:52.273379  734452 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 13:46:52.273433  734452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 13:46:52.280768  734452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 13:46:52.288560  734452 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 13:46:52.288610  734452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 13:46:52.296093  734452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 13:46:52.303964  734452 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 13:46:52.304023  734452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 13:46:52.311559  734452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 13:46:52.320197  734452 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 13:46:52.320257  734452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 13:46:52.334065  734452 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 13:46:52.371455  734452 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 13:46:52.371571  734452 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 13:46:52.442098  734452 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 13:46:52.442200  734452 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1213 13:46:52.442255  734452 kubeadm.go:319] OS: Linux
	I1213 13:46:52.442323  734452 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 13:46:52.442390  734452 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 13:46:52.442455  734452 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 13:46:52.442512  734452 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 13:46:52.442578  734452 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 13:46:52.442697  734452 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 13:46:52.442826  734452 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 13:46:52.442969  734452 kubeadm.go:319] CGROUPS_IO: enabled
	I1213 13:46:52.508064  734452 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 13:46:52.508249  734452 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 13:46:52.508406  734452 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 13:46:52.516288  734452 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 13:46:52.519224  734452 out.go:252]   - Generating certificates and keys ...
	I1213 13:46:52.519355  734452 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 13:46:52.519493  734452 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 13:46:52.532097  734452 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 13:46:52.698464  734452 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 13:46:52.742997  734452 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 13:46:52.834618  734452 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 13:46:52.947440  734452 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 13:46:52.947607  734452 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-362964] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 13:46:53.014857  734452 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 13:46:53.015046  734452 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-362964] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 13:46:53.141370  734452 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 13:46:53.236321  734452 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 13:46:53.329100  734452 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 13:46:53.329196  734452 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 13:46:53.418157  734452 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 13:46:53.508241  734452 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 13:46:53.569616  734452 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 13:46:53.618621  734452 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 13:46:53.646993  734452 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 13:46:53.647697  734452 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 13:46:53.651749  734452 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 13:46:53.653114  734452 out.go:252]   - Booting up control plane ...
	I1213 13:46:53.653242  734452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 13:46:53.653571  734452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 13:46:53.654959  734452 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 13:46:53.677067  734452 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 13:46:53.677242  734452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 13:46:53.684167  734452 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 13:46:53.684396  734452 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 13:46:53.684462  734452 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 13:46:53.802893  734452 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 13:46:53.803078  734452 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1213 13:46:50.770133  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	W1213 13:46:53.268827  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	I1213 13:46:54.304306  734452 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.553214ms
	I1213 13:46:54.307257  734452 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 13:46:54.307404  734452 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1213 13:46:54.307545  734452 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 13:46:54.307659  734452 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 13:46:54.813847  734452 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 506.428042ms
	I1213 13:46:56.298840  734452 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.990746279s
	I1213 13:46:57.809364  734452 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502006023s
	I1213 13:46:57.828589  734452 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 13:46:57.839273  734452 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 13:46:57.849857  734452 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 13:46:57.850169  734452 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-362964 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 13:46:57.859609  734452 kubeadm.go:319] [bootstrap-token] Using token: wuq8cx.vg81wzcp5d3gm8z3
	I1213 13:46:57.861437  734452 out.go:252]   - Configuring RBAC rules ...
	I1213 13:46:57.861592  734452 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 13:46:57.864478  734452 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 13:46:57.870588  734452 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 13:46:57.873470  734452 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 13:46:57.877170  734452 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 13:46:57.880076  734452 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 13:46:58.219051  734452 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 13:46:58.659077  734452 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 13:46:59.216148  734452 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 13:46:59.217382  734452 kubeadm.go:319] 
	I1213 13:46:59.217471  734452 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 13:46:59.217510  734452 kubeadm.go:319] 
	I1213 13:46:59.217666  734452 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 13:46:59.217685  734452 kubeadm.go:319] 
	I1213 13:46:59.217725  734452 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 13:46:59.217827  734452 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 13:46:59.217929  734452 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 13:46:59.217945  734452 kubeadm.go:319] 
	I1213 13:46:59.218020  734452 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 13:46:59.218034  734452 kubeadm.go:319] 
	I1213 13:46:59.218103  734452 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 13:46:59.218116  734452 kubeadm.go:319] 
	I1213 13:46:59.218206  734452 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 13:46:59.218322  734452 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 13:46:59.218423  734452 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 13:46:59.218435  734452 kubeadm.go:319] 
	I1213 13:46:59.218551  734452 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 13:46:59.218673  734452 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 13:46:59.218689  734452 kubeadm.go:319] 
	I1213 13:46:59.218845  734452 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token wuq8cx.vg81wzcp5d3gm8z3 \
	I1213 13:46:59.218984  734452 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ef8a7d1add12598ce2ec2dab13c01ff0d42437969bb9f662810a30bd819ab8f9 \
	I1213 13:46:59.219028  734452 kubeadm.go:319] 	--control-plane 
	I1213 13:46:59.219044  734452 kubeadm.go:319] 
	I1213 13:46:59.219172  734452 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 13:46:59.219183  734452 kubeadm.go:319] 
	I1213 13:46:59.219331  734452 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token wuq8cx.vg81wzcp5d3gm8z3 \
	I1213 13:46:59.219511  734452 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ef8a7d1add12598ce2ec2dab13c01ff0d42437969bb9f662810a30bd819ab8f9 
	I1213 13:46:59.221855  734452 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1213 13:46:59.222014  734452 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
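Neither warning is fatal here: the kernel "configs" module is simply not present under /lib/modules/6.8.0-1045-gcp, and minikube starts kubelet itself rather than relying on systemd enablement. If the second warning were to be silenced, the fix is the one kubeadm suggests:
	# Optional: enable kubelet at boot inside the node to clear the Service-kubelet warning.
	sudo systemctl enable kubelet.service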
	I1213 13:46:59.222057  734452 cni.go:84] Creating CNI manager for ""
	I1213 13:46:59.222071  734452 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:46:59.232701  734452 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1213 13:46:55.768853  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	W1213 13:46:58.272372  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	I1213 13:46:59.233945  734452 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 13:46:59.238824  734452 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1213 13:46:59.238847  734452 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1213 13:46:59.251766  734452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 13:46:59.507332  734452 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 13:46:59.507448  734452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-362964 minikube.k8s.io/updated_at=2025_12_13T13_46_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7 minikube.k8s.io/name=newest-cni-362964 minikube.k8s.io/primary=true
	I1213 13:46:59.507499  734452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:46:59.518941  734452 ops.go:34] apiserver oom_adj: -16
	I1213 13:46:59.598939  734452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:47:00.099996  734452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:47:00.599639  734452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:47:01.099539  734452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:47:01.599984  734452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:47:02.099427  734452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:47:02.599908  734452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:47:03.099950  734452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:47:03.600002  734452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:47:04.099741  734452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 13:47:04.186844  734452 kubeadm.go:1114] duration metric: took 4.679481246s to wait for elevateKubeSystemPrivileges
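The repeated "kubectl get sa default" calls above are a poll: minikube retries until the default ServiceAccount exists before declaring elevateKubeSystemPrivileges done. As a standalone sketch, the same wait condition is:
	# Poll for the default ServiceAccount, mirroring the retries logged above.
	until sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done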
	I1213 13:47:04.186888  734452 kubeadm.go:403] duration metric: took 11.975552133s to StartCluster
	I1213 13:47:04.186911  734452 settings.go:142] acquiring lock: {Name:mkb44193ba58b09d8615650747eaad19c43e1a80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:47:04.186980  734452 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:47:04.189150  734452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/kubeconfig: {Name:mke96882ff9199e558f67b9408c8f04265bde7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:47:04.189387  734452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 13:47:04.189393  734452 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:47:04.189484  734452 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 13:47:04.189589  734452 config.go:182] Loaded profile config "newest-cni-362964": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:47:04.189606  734452 addons.go:70] Setting default-storageclass=true in profile "newest-cni-362964"
	I1213 13:47:04.189639  734452 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-362964"
	I1213 13:47:04.189590  734452 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-362964"
	I1213 13:47:04.189735  734452 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-362964"
	I1213 13:47:04.189771  734452 host.go:66] Checking if "newest-cni-362964" exists ...
	I1213 13:47:04.190132  734452 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:04.190266  734452 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:04.191771  734452 out.go:179] * Verifying Kubernetes components...
	I1213 13:47:04.192917  734452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:47:04.213995  734452 addons.go:239] Setting addon default-storageclass=true in "newest-cni-362964"
	I1213 13:47:04.214032  734452 host.go:66] Checking if "newest-cni-362964" exists ...
	I1213 13:47:04.214340  734452 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:04.215411  734452 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 13:47:04.216824  734452 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:47:04.216845  734452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 13:47:04.216917  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:04.241958  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:04.243444  734452 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 13:47:04.244009  734452 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 13:47:04.244149  734452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:04.269246  734452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33515 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:04.283264  734452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 13:47:04.347523  734452 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:47:04.370501  734452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:47:04.383054  734452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 13:47:04.465270  734452 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1213 13:47:04.466856  734452 api_server.go:52] waiting for apiserver process to appear ...
	I1213 13:47:04.466921  734452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:47:04.698284  734452 api_server.go:72] duration metric: took 508.857688ms to wait for apiserver process to appear ...
	I1213 13:47:04.698313  734452 api_server.go:88] waiting for apiserver healthz status ...
	I1213 13:47:04.698335  734452 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:04.703430  734452 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1213 13:47:04.704242  734452 api_server.go:141] control plane version: v1.35.0-beta.0
	I1213 13:47:04.704267  734452 api_server.go:131] duration metric: took 5.946265ms to wait for apiserver health ...
	I1213 13:47:04.704278  734452 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 13:47:04.704621  734452 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1213 13:47:04.705635  734452 addons.go:530] duration metric: took 516.153349ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1213 13:47:04.707006  734452 system_pods.go:59] 8 kube-system pods found
	I1213 13:47:04.707033  734452 system_pods.go:61] "coredns-7d764666f9-rqktl" [7c70d7d0-5139-4893-905c-0e183495035e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 13:47:04.707040  734452 system_pods.go:61] "etcd-newest-cni-362964" [49d03570-d59e-4e95-902f-1994733e6009] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 13:47:04.707048  734452 system_pods.go:61] "kindnet-qk8dn" [0df822e7-da1c-43ee-9a1e-b2131ae84e50] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1213 13:47:04.707052  734452 system_pods.go:61] "kube-apiserver-newest-cni-362964" [31c7799d-0188-4e2f-8d32-eb6e3ffe29ae] Running
	I1213 13:47:04.707058  734452 system_pods.go:61] "kube-controller-manager-newest-cni-362964" [cee82184-0e71-4dfb-8851-d642f2716578] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 13:47:04.707066  734452 system_pods.go:61] "kube-proxy-97cpx" [c081628a-7cdd-4b8c-9d28-9d95707c6064] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 13:47:04.707070  734452 system_pods.go:61] "kube-scheduler-newest-cni-362964" [d160f41f-e904-4d11-9b2c-157bfcbc668f] Running
	I1213 13:47:04.707080  734452 system_pods.go:61] "storage-provisioner" [b6d4689e-b3f1-496d-bfd4-11cb93ea7c15] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 13:47:04.707087  734452 system_pods.go:74] duration metric: took 2.803546ms to wait for pod list to return data ...
	I1213 13:47:04.707096  734452 default_sa.go:34] waiting for default service account to be created ...
	I1213 13:47:04.709160  734452 default_sa.go:45] found service account: "default"
	I1213 13:47:04.709184  734452 default_sa.go:55] duration metric: took 2.081496ms for default service account to be created ...
	I1213 13:47:04.709199  734452 kubeadm.go:587] duration metric: took 519.780876ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 13:47:04.709228  734452 node_conditions.go:102] verifying NodePressure condition ...
	I1213 13:47:04.711185  734452 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 13:47:04.711206  734452 node_conditions.go:123] node cpu capacity is 8
	I1213 13:47:04.711219  734452 node_conditions.go:105] duration metric: took 1.982129ms to run NodePressure ...
	I1213 13:47:04.711228  734452 start.go:242] waiting for startup goroutines ...
	I1213 13:47:04.971862  734452 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-362964" context rescaled to 1 replicas
	I1213 13:47:04.971910  734452 start.go:247] waiting for cluster config update ...
	I1213 13:47:04.971926  734452 start.go:256] writing updated cluster config ...
	I1213 13:47:04.972233  734452 ssh_runner.go:195] Run: rm -f paused
	I1213 13:47:05.021376  734452 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1213 13:47:05.023730  734452 out.go:179] * Done! kubectl is now configured to use "newest-cni-362964" cluster and "default" namespace by default
	W1213 13:47:00.767923  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	W1213 13:47:02.768600  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	W1213 13:47:05.270393  730912 pod_ready.go:104] pod "coredns-66bc5c9577-tzzmx" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.808518267Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.810834197Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=32ed49ec-fea1-4c04-b79f-3f5c6fb64a28 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.811140162Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=50f6b770-bb49-429a-8701-99c915e6aacb name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.81222749Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.81262526Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.813106372Z" level=info msg="Ran pod sandbox 9eed392ffc194f5c102b54804664f331dd2174709ba08407c1ad182f3bafbe60 with infra container: kube-system/kube-proxy-97cpx/POD" id=32ed49ec-fea1-4c04-b79f-3f5c6fb64a28 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.814242132Z" level=info msg="Ran pod sandbox 70fcb80a3a7e0d17c9ffd3699d904fbc289533ce9516cb5e1325ad204f007b6d with infra container: kube-system/kindnet-qk8dn/POD" id=50f6b770-bb49-429a-8701-99c915e6aacb name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.81534774Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=1e7f52e0-2f31-4606-abfb-7211953b47a3 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.815384284Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f9bf4497-63f8-4844-aacc-4fa9d0223322 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.816305907Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=45d5ed37-657e-42ef-ba5e-32720a7c48f6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.816321287Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=07e8c3b7-9161-484d-ad43-f81f8181b28b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.820211354Z" level=info msg="Creating container: kube-system/kube-proxy-97cpx/kube-proxy" id=f708303a-51ea-48c5-b2ea-85e00989d6f6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.820335342Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.821530305Z" level=info msg="Creating container: kube-system/kindnet-qk8dn/kindnet-cni" id=fc7784ec-298c-4990-8eaf-7401a410eedd name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.821598435Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.824526038Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.825011931Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.825924245Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.826291975Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.851219713Z" level=info msg="Created container 9cd342badb06c906370d0b79ae22eafe0db96b13473e34e9db360aa3ab52393b: kube-system/kindnet-qk8dn/kindnet-cni" id=fc7784ec-298c-4990-8eaf-7401a410eedd name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.851925625Z" level=info msg="Starting container: 9cd342badb06c906370d0b79ae22eafe0db96b13473e34e9db360aa3ab52393b" id=6ec1f16b-9103-4dd2-9042-6b6e0d739721 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.852518966Z" level=info msg="Created container 93c88eef5a4b288adeabdac753f56666ecd4646d3ae3523ab3965511ee191e3d: kube-system/kube-proxy-97cpx/kube-proxy" id=f708303a-51ea-48c5-b2ea-85e00989d6f6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.853100076Z" level=info msg="Starting container: 93c88eef5a4b288adeabdac753f56666ecd4646d3ae3523ab3965511ee191e3d" id=6c08ca36-0b29-4b6c-886a-3ee793cf1a86 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.853661729Z" level=info msg="Started container" PID=1604 containerID=9cd342badb06c906370d0b79ae22eafe0db96b13473e34e9db360aa3ab52393b description=kube-system/kindnet-qk8dn/kindnet-cni id=6ec1f16b-9103-4dd2-9042-6b6e0d739721 name=/runtime.v1.RuntimeService/StartContainer sandboxID=70fcb80a3a7e0d17c9ffd3699d904fbc289533ce9516cb5e1325ad204f007b6d
	Dec 13 13:47:04 newest-cni-362964 crio[775]: time="2025-12-13T13:47:04.856147073Z" level=info msg="Started container" PID=1603 containerID=93c88eef5a4b288adeabdac753f56666ecd4646d3ae3523ab3965511ee191e3d description=kube-system/kube-proxy-97cpx/kube-proxy id=6c08ca36-0b29-4b6c-886a-3ee793cf1a86 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9eed392ffc194f5c102b54804664f331dd2174709ba08407c1ad182f3bafbe60
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9cd342badb06c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   70fcb80a3a7e0       kindnet-qk8dn                               kube-system
	93c88eef5a4b2       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   1 second ago        Running             kube-proxy                0                   9eed392ffc194       kube-proxy-97cpx                            kube-system
	be2fe5061a266       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   11 seconds ago      Running             etcd                      0                   27aa576f13a08       etcd-newest-cni-362964                      kube-system
	0f2465ffa5616       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   11 seconds ago      Running             kube-controller-manager   0                   eb5aeee81ca82       kube-controller-manager-newest-cni-362964   kube-system
	bdc1f3a1a5278       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   11 seconds ago      Running             kube-scheduler            0                   48fb06487aeac       kube-scheduler-newest-cni-362964            kube-system
	5e1098f46353f       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   11 seconds ago      Running             kube-apiserver            0                   0f71a6ed3af80       kube-apiserver-newest-cni-362964            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-362964
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-362964
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=newest-cni-362964
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_46_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:46:56 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-362964
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:46:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:46:58 +0000   Sat, 13 Dec 2025 13:46:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:46:58 +0000   Sat, 13 Dec 2025 13:46:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:46:58 +0000   Sat, 13 Dec 2025 13:46:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 13 Dec 2025 13:46:58 +0000   Sat, 13 Dec 2025 13:46:54 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-362964
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                7984a389-2b8f-4f40-bc98-e167ef24613c
	  Boot ID:                    3a031c38-2de5-4abf-9191-ca3cf8c37af1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-362964                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-qk8dn                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-362964             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-362964    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-97cpx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-362964             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-362964 event: Registered Node newest-cni-362964 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 d4 5a 35 c7 c3 08 06
	[  +0.021086] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +19.681588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 0c 97 18 9b e3 08 06
	[  +0.000314] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 04 61 d2 c8 ed 08 06
	[Dec13 13:44] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[  +7.252347] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 ce fd 58 59 0f 08 06
	[  +0.000117] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	[  +1.567410] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 59 b8 80 29 4a 08 06
	[  +0.000370] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +13.814205] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 cb 6b 87 5d af 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[Dec13 13:45] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8e 49 cc d7 b3 9c 08 06
	[  +0.000851] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	
	
	==> etcd [be2fe5061a266e35e3050ba52cea3bef3555ad5661aa96c75d727a217ec2e9ac] <==
	{"level":"warn","ts":"2025-12-13T13:46:55.562318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.569827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.579181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.587050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.594341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.601753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.609764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.616830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.624685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.632103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.639917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.647862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.653991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.661056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.668512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.676423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.684038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.691044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.697886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.705315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.721042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.724825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.733104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.740358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:55.748262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41144","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:47:06 up  2:29,  0 user,  load average: 5.37, 4.31, 2.76
	Linux newest-cni-362964 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9cd342badb06c906370d0b79ae22eafe0db96b13473e34e9db360aa3ab52393b] <==
	I1213 13:47:05.039280       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 13:47:05.039519       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1213 13:47:05.039674       1 main.go:148] setting mtu 1500 for CNI 
	I1213 13:47:05.039693       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 13:47:05.039715       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T13:47:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 13:47:05.242045       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 13:47:05.242069       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 13:47:05.242081       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 13:47:05.242310       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 13:47:05.642674       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 13:47:05.642705       1 metrics.go:72] Registering metrics
	I1213 13:47:05.642762       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [5e1098f46353faef3c52fed5f177465a1aef4073a7c82fa9c517517e2c92dcd5] <==
	I1213 13:46:56.331995       1 policy_source.go:248] refreshing policies
	E1213 13:46:56.336046       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E1213 13:46:56.358139       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1213 13:46:56.405329       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 13:46:56.410120       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1213 13:46:56.410495       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:46:56.415799       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:46:56.539530       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 13:46:57.208703       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1213 13:46:57.212562       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1213 13:46:57.212585       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1213 13:46:57.678491       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 13:46:57.718345       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 13:46:57.814822       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1213 13:46:57.821924       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1213 13:46:57.823219       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 13:46:57.827550       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:46:58.231249       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 13:46:58.630095       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 13:46:58.654919       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1213 13:46:58.669267       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 13:47:03.883564       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1213 13:47:04.083763       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 13:47:04.184906       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:47:04.188475       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [0f2465ffa561632a2f9e23eca257d64c551786bbad7a0e9ee21df2c53defa6ee] <==
	I1213 13:47:03.045908       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:03.045985       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:03.046217       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:03.046249       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:03.046296       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:03.046322       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:03.046503       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:03.049667       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:03.049710       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:03.050633       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:03.050763       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:03.050810       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:03.051131       1 range_allocator.go:177] "Sending events to api server"
	I1213 13:47:03.050830       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:03.051178       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1213 13:47:03.051185       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 13:47:03.051190       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:03.050887       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:03.050840       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:03.051637       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:03.057524       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-362964" podCIDRs=["10.42.0.0/24"]
	I1213 13:47:03.138495       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:03.138515       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 13:47:03.138522       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1213 13:47:03.142397       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [93c88eef5a4b288adeabdac753f56666ecd4646d3ae3523ab3965511ee191e3d] <==
	I1213 13:47:04.893819       1 server_linux.go:53] "Using iptables proxy"
	I1213 13:47:04.967652       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 13:47:05.067910       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:05.067956       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1213 13:47:05.068066       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:47:05.087588       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:47:05.087663       1 server_linux.go:136] "Using iptables Proxier"
	I1213 13:47:05.093420       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:47:05.093787       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 13:47:05.093822       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:47:05.096903       1 config.go:200] "Starting service config controller"
	I1213 13:47:05.097363       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:47:05.096930       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:47:05.097403       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:47:05.096945       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:47:05.097416       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:47:05.097050       1 config.go:309] "Starting node config controller"
	I1213 13:47:05.097434       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:47:05.097440       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:47:05.197944       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 13:47:05.197960       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 13:47:05.197977       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [bdc1f3a1a5278bb4b887089e4fa5fe98535a74506a3bff5dafde11c3e09fb5cd] <==
	E1213 13:46:57.103279       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1213 13:46:57.104519       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1213 13:46:57.118170       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1213 13:46:57.119304       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1213 13:46:57.169884       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1213 13:46:57.170954       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1213 13:46:57.211409       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1213 13:46:57.212514       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1213 13:46:57.242675       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1213 13:46:57.243594       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1213 13:46:57.323320       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1213 13:46:57.324377       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1213 13:46:57.342490       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1213 13:46:57.343518       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1213 13:46:57.397170       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1213 13:46:57.398313       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1213 13:46:57.408507       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1213 13:46:57.409520       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1213 13:46:57.416842       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1213 13:46:57.417764       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1213 13:46:57.419795       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1213 13:46:57.420650       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1213 13:46:57.455943       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1213 13:46:57.457034       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	I1213 13:46:57.891985       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 13 13:47:00 newest-cni-362964 kubelet[1312]: I1213 13:47:00.948596    1312 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-362964" podStartSLOduration=2.9485762639999997 podStartE2EDuration="2.948576264s" podCreationTimestamp="2025-12-13 13:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:47:00.938268776 +0000 UTC m=+2.543380172" watchObservedRunningTime="2025-12-13 13:47:00.948576264 +0000 UTC m=+2.553687660"
	Dec 13 13:47:01 newest-cni-362964 kubelet[1312]: E1213 13:47:01.526160    1312 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-362964" containerName="kube-controller-manager"
	Dec 13 13:47:01 newest-cni-362964 kubelet[1312]: E1213 13:47:01.534498    1312 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-362964" containerName="kube-scheduler"
	Dec 13 13:47:01 newest-cni-362964 kubelet[1312]: I1213 13:47:01.537041    1312 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-362964" podStartSLOduration=3.537022227 podStartE2EDuration="3.537022227s" podCreationTimestamp="2025-12-13 13:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:47:00.948813662 +0000 UTC m=+2.553925050" watchObservedRunningTime="2025-12-13 13:47:01.537022227 +0000 UTC m=+3.142133618"
	Dec 13 13:47:02 newest-cni-362964 kubelet[1312]: E1213 13:47:02.585155    1312 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-362964" containerName="etcd"
	Dec 13 13:47:03 newest-cni-362964 kubelet[1312]: I1213 13:47:03.159006    1312 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 13 13:47:03 newest-cni-362964 kubelet[1312]: I1213 13:47:03.160260    1312 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 13 13:47:04 newest-cni-362964 kubelet[1312]: I1213 13:47:04.015602    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c081628a-7cdd-4b8c-9d28-9d95707c6064-xtables-lock\") pod \"kube-proxy-97cpx\" (UID: \"c081628a-7cdd-4b8c-9d28-9d95707c6064\") " pod="kube-system/kube-proxy-97cpx"
	Dec 13 13:47:04 newest-cni-362964 kubelet[1312]: I1213 13:47:04.015644    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c081628a-7cdd-4b8c-9d28-9d95707c6064-lib-modules\") pod \"kube-proxy-97cpx\" (UID: \"c081628a-7cdd-4b8c-9d28-9d95707c6064\") " pod="kube-system/kube-proxy-97cpx"
	Dec 13 13:47:04 newest-cni-362964 kubelet[1312]: I1213 13:47:04.015664    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0df822e7-da1c-43ee-9a1e-b2131ae84e50-cni-cfg\") pod \"kindnet-qk8dn\" (UID: \"0df822e7-da1c-43ee-9a1e-b2131ae84e50\") " pod="kube-system/kindnet-qk8dn"
	Dec 13 13:47:04 newest-cni-362964 kubelet[1312]: I1213 13:47:04.015682    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0df822e7-da1c-43ee-9a1e-b2131ae84e50-lib-modules\") pod \"kindnet-qk8dn\" (UID: \"0df822e7-da1c-43ee-9a1e-b2131ae84e50\") " pod="kube-system/kindnet-qk8dn"
	Dec 13 13:47:04 newest-cni-362964 kubelet[1312]: I1213 13:47:04.015763    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdcv2\" (UniqueName: \"kubernetes.io/projected/0df822e7-da1c-43ee-9a1e-b2131ae84e50-kube-api-access-rdcv2\") pod \"kindnet-qk8dn\" (UID: \"0df822e7-da1c-43ee-9a1e-b2131ae84e50\") " pod="kube-system/kindnet-qk8dn"
	Dec 13 13:47:04 newest-cni-362964 kubelet[1312]: I1213 13:47:04.015845    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c081628a-7cdd-4b8c-9d28-9d95707c6064-kube-proxy\") pod \"kube-proxy-97cpx\" (UID: \"c081628a-7cdd-4b8c-9d28-9d95707c6064\") " pod="kube-system/kube-proxy-97cpx"
	Dec 13 13:47:04 newest-cni-362964 kubelet[1312]: I1213 13:47:04.015880    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxrqr\" (UniqueName: \"kubernetes.io/projected/c081628a-7cdd-4b8c-9d28-9d95707c6064-kube-api-access-xxrqr\") pod \"kube-proxy-97cpx\" (UID: \"c081628a-7cdd-4b8c-9d28-9d95707c6064\") " pod="kube-system/kube-proxy-97cpx"
	Dec 13 13:47:04 newest-cni-362964 kubelet[1312]: I1213 13:47:04.015894    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0df822e7-da1c-43ee-9a1e-b2131ae84e50-xtables-lock\") pod \"kindnet-qk8dn\" (UID: \"0df822e7-da1c-43ee-9a1e-b2131ae84e50\") " pod="kube-system/kindnet-qk8dn"
	Dec 13 13:47:04 newest-cni-362964 kubelet[1312]: E1213 13:47:04.128785    1312 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 13 13:47:04 newest-cni-362964 kubelet[1312]: E1213 13:47:04.128833    1312 projected.go:196] Error preparing data for projected volume kube-api-access-rdcv2 for pod kube-system/kindnet-qk8dn: configmap "kube-root-ca.crt" not found
	Dec 13 13:47:04 newest-cni-362964 kubelet[1312]: E1213 13:47:04.128970    1312 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0df822e7-da1c-43ee-9a1e-b2131ae84e50-kube-api-access-rdcv2 podName:0df822e7-da1c-43ee-9a1e-b2131ae84e50 nodeName:}" failed. No retries permitted until 2025-12-13 13:47:04.628937105 +0000 UTC m=+6.234048495 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rdcv2" (UniqueName: "kubernetes.io/projected/0df822e7-da1c-43ee-9a1e-b2131ae84e50-kube-api-access-rdcv2") pod "kindnet-qk8dn" (UID: "0df822e7-da1c-43ee-9a1e-b2131ae84e50") : configmap "kube-root-ca.crt" not found
	Dec 13 13:47:04 newest-cni-362964 kubelet[1312]: E1213 13:47:04.133978    1312 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 13 13:47:04 newest-cni-362964 kubelet[1312]: E1213 13:47:04.134015    1312 projected.go:196] Error preparing data for projected volume kube-api-access-xxrqr for pod kube-system/kube-proxy-97cpx: configmap "kube-root-ca.crt" not found
	Dec 13 13:47:04 newest-cni-362964 kubelet[1312]: E1213 13:47:04.134098    1312 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c081628a-7cdd-4b8c-9d28-9d95707c6064-kube-api-access-xxrqr podName:c081628a-7cdd-4b8c-9d28-9d95707c6064 nodeName:}" failed. No retries permitted until 2025-12-13 13:47:04.634073742 +0000 UTC m=+6.239185117 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xxrqr" (UniqueName: "kubernetes.io/projected/c081628a-7cdd-4b8c-9d28-9d95707c6064-kube-api-access-xxrqr") pod "kube-proxy-97cpx" (UID: "c081628a-7cdd-4b8c-9d28-9d95707c6064") : configmap "kube-root-ca.crt" not found
	Dec 13 13:47:05 newest-cni-362964 kubelet[1312]: I1213 13:47:05.547652    1312 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-qk8dn" podStartSLOduration=2.54763302 podStartE2EDuration="2.54763302s" podCreationTimestamp="2025-12-13 13:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:47:05.54747101 +0000 UTC m=+7.152582403" watchObservedRunningTime="2025-12-13 13:47:05.54763302 +0000 UTC m=+7.152744412"
	Dec 13 13:47:05 newest-cni-362964 kubelet[1312]: E1213 13:47:05.822656    1312 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-362964" containerName="kube-scheduler"
	Dec 13 13:47:05 newest-cni-362964 kubelet[1312]: I1213 13:47:05.833655    1312 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-97cpx" podStartSLOduration=2.833634884 podStartE2EDuration="2.833634884s" podCreationTimestamp="2025-12-13 13:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 13:47:05.556556663 +0000 UTC m=+7.161668055" watchObservedRunningTime="2025-12-13 13:47:05.833634884 +0000 UTC m=+7.438746277"
	Dec 13 13:47:06 newest-cni-362964 kubelet[1312]: E1213 13:47:06.400898    1312 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-362964" containerName="kube-apiserver"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-362964 -n newest-cni-362964
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-362964 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-rqktl storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-362964 describe pod coredns-7d764666f9-rqktl storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-362964 describe pod coredns-7d764666f9-rqktl storage-provisioner: exit status 1 (62.769262ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-rqktl" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-362964 describe pod coredns-7d764666f9-rqktl storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.14s)
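For anyone re-running this post-mortem by hand, the helper invocations above boil down to the two kubectl commands below. This is only an illustrative sketch using the context and pod names from this run; the explicit -n kube-system flag on the describe step is an addition here (the harness omitted a namespace, which is the likely reason the coredns and storage-provisioner lookups above returned NotFound even though both pods were listed as non-running in kube-system):

	kubectl --context newest-cni-362964 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'
	kubectl --context newest-cni-362964 -n kube-system describe pod coredns-7d764666f9-rqktl storage-provisioner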

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (5.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-362964 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-362964 --alsologtostderr -v=1: exit status 80 (2.314204612s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-362964 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:47:26.058922  745810 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:47:26.059046  745810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:47:26.059058  745810 out.go:374] Setting ErrFile to fd 2...
	I1213 13:47:26.059065  745810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:47:26.059271  745810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:47:26.059512  745810 out.go:368] Setting JSON to false
	I1213 13:47:26.059532  745810 mustload.go:66] Loading cluster: newest-cni-362964
	I1213 13:47:26.059887  745810 config.go:182] Loaded profile config "newest-cni-362964": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:47:26.060260  745810 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:26.077932  745810 host.go:66] Checking if "newest-cni-362964" exists ...
	I1213 13:47:26.078190  745810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:47:26.139832  745810 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-13 13:47:26.129329895 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:47:26.140687  745810 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765613186-22122/minikube-v1.37.0-1765613186-22122-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765613186-22122-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-362964 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1213 13:47:26.143347  745810 out.go:179] * Pausing node newest-cni-362964 ... 
	I1213 13:47:26.144307  745810 host.go:66] Checking if "newest-cni-362964" exists ...
	I1213 13:47:26.144557  745810 ssh_runner.go:195] Run: systemctl --version
	I1213 13:47:26.144614  745810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:26.164887  745810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:26.266706  745810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:47:26.279112  745810 pause.go:52] kubelet running: true
	I1213 13:47:26.279179  745810 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 13:47:26.418615  745810 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 13:47:26.418716  745810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 13:47:26.488411  745810 cri.go:89] found id: "2a7d6843350bc29728cd2ac4b2bc9e2e65103e590f5de16c0ae3fbedc865ba87"
	I1213 13:47:26.488433  745810 cri.go:89] found id: "af96593a4aff28be333ae03c866d3b489ad8ed84e208d79e37b6f05de74ac937"
	I1213 13:47:26.488439  745810 cri.go:89] found id: "5da4caf87e21e2c24c49feccc728cddb718f645b6ea4db87e0bf78cf3c81e434"
	I1213 13:47:26.488443  745810 cri.go:89] found id: "bf5900d175c7d2696a8f8d812ce80bb83b78d5e729b180a06ffe24fd4380248b"
	I1213 13:47:26.488446  745810 cri.go:89] found id: "110b53112a1a28576070f2a2242056e28359eefcce484ce9f1badc19b9aa9fe0"
	I1213 13:47:26.488449  745810 cri.go:89] found id: "467a2e1a14516b194138faf28743f2e31cc6c2c67e3a2b45354fa6c0ff15d609"
	I1213 13:47:26.488452  745810 cri.go:89] found id: ""
	I1213 13:47:26.488489  745810 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:47:26.500408  745810 retry.go:31] will retry after 192.261097ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:47:26Z" level=error msg="open /run/runc: no such file or directory"
	I1213 13:47:26.693851  745810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:47:26.706025  745810 pause.go:52] kubelet running: false
	I1213 13:47:26.706083  745810 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 13:47:26.830992  745810 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 13:47:26.831078  745810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 13:47:26.912418  745810 cri.go:89] found id: "2a7d6843350bc29728cd2ac4b2bc9e2e65103e590f5de16c0ae3fbedc865ba87"
	I1213 13:47:26.912448  745810 cri.go:89] found id: "af96593a4aff28be333ae03c866d3b489ad8ed84e208d79e37b6f05de74ac937"
	I1213 13:47:26.912454  745810 cri.go:89] found id: "5da4caf87e21e2c24c49feccc728cddb718f645b6ea4db87e0bf78cf3c81e434"
	I1213 13:47:26.912458  745810 cri.go:89] found id: "bf5900d175c7d2696a8f8d812ce80bb83b78d5e729b180a06ffe24fd4380248b"
	I1213 13:47:26.912479  745810 cri.go:89] found id: "110b53112a1a28576070f2a2242056e28359eefcce484ce9f1badc19b9aa9fe0"
	I1213 13:47:26.912484  745810 cri.go:89] found id: "467a2e1a14516b194138faf28743f2e31cc6c2c67e3a2b45354fa6c0ff15d609"
	I1213 13:47:26.912488  745810 cri.go:89] found id: ""
	I1213 13:47:26.912537  745810 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:47:26.927740  745810 retry.go:31] will retry after 320.078029ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:47:26Z" level=error msg="open /run/runc: no such file or directory"
	I1213 13:47:27.248223  745810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:47:27.261113  745810 pause.go:52] kubelet running: false
	I1213 13:47:27.261181  745810 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 13:47:27.387216  745810 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 13:47:27.387299  745810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 13:47:27.462269  745810 cri.go:89] found id: "2a7d6843350bc29728cd2ac4b2bc9e2e65103e590f5de16c0ae3fbedc865ba87"
	I1213 13:47:27.462294  745810 cri.go:89] found id: "af96593a4aff28be333ae03c866d3b489ad8ed84e208d79e37b6f05de74ac937"
	I1213 13:47:27.462299  745810 cri.go:89] found id: "5da4caf87e21e2c24c49feccc728cddb718f645b6ea4db87e0bf78cf3c81e434"
	I1213 13:47:27.462303  745810 cri.go:89] found id: "bf5900d175c7d2696a8f8d812ce80bb83b78d5e729b180a06ffe24fd4380248b"
	I1213 13:47:27.462308  745810 cri.go:89] found id: "110b53112a1a28576070f2a2242056e28359eefcce484ce9f1badc19b9aa9fe0"
	I1213 13:47:27.462313  745810 cri.go:89] found id: "467a2e1a14516b194138faf28743f2e31cc6c2c67e3a2b45354fa6c0ff15d609"
	I1213 13:47:27.462317  745810 cri.go:89] found id: ""
	I1213 13:47:27.462364  745810 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:47:27.481704  745810 retry.go:31] will retry after 620.700502ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:47:27Z" level=error msg="open /run/runc: no such file or directory"
	I1213 13:47:28.103577  745810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:47:28.116314  745810 pause.go:52] kubelet running: false
	I1213 13:47:28.116369  745810 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 13:47:28.225437  745810 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 13:47:28.225521  745810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 13:47:28.291015  745810 cri.go:89] found id: "2a7d6843350bc29728cd2ac4b2bc9e2e65103e590f5de16c0ae3fbedc865ba87"
	I1213 13:47:28.291037  745810 cri.go:89] found id: "af96593a4aff28be333ae03c866d3b489ad8ed84e208d79e37b6f05de74ac937"
	I1213 13:47:28.291043  745810 cri.go:89] found id: "5da4caf87e21e2c24c49feccc728cddb718f645b6ea4db87e0bf78cf3c81e434"
	I1213 13:47:28.291050  745810 cri.go:89] found id: "bf5900d175c7d2696a8f8d812ce80bb83b78d5e729b180a06ffe24fd4380248b"
	I1213 13:47:28.291054  745810 cri.go:89] found id: "110b53112a1a28576070f2a2242056e28359eefcce484ce9f1badc19b9aa9fe0"
	I1213 13:47:28.291059  745810 cri.go:89] found id: "467a2e1a14516b194138faf28743f2e31cc6c2c67e3a2b45354fa6c0ff15d609"
	I1213 13:47:28.291063  745810 cri.go:89] found id: ""
	I1213 13:47:28.291117  745810 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:47:28.305124  745810 out.go:203] 
	W1213 13:47:28.306482  745810 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:47:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:47:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 13:47:28.306500  745810 out.go:285] * 
	* 
	W1213 13:47:28.311312  745810 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 13:47:28.312625  745810 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-362964 --alsologtostderr -v=1 failed: exit status 80
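Reading the stderr above, the pause flow is: confirm kubelet is active, run `systemctl disable --now kubelet`, list kube-system/kubernetes-dashboard/istio-operator containers with `crictl ps`, then enumerate running containers with `sudo runc list -f json` before freezing them. Every `runc list` attempt fails with `open /run/runc: no such file or directory`, the retries (192ms, 320ms, 620ms) are exhausted, and the command exits with GUEST_PAUSE (exit status 80). The missing `/run/runc` suggests the containers CRI-O reported were not created under runc's default state root (a different OCI runtime or runtime root), though the log does not confirm which. The sketch below only illustrates the failing retry step; it is a simplification, not minikube's pause.go/retry.go code, and it omits the SSH plumbing the real command uses.

// pause_runc_sketch.go - simplified sketch of retrying "sudo runc list -f json".
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Backoffs loosely mirror the retries in the log (192ms, 320ms, 620ms).
	backoffs := []time.Duration{192 * time.Millisecond, 320 * time.Millisecond, 620 * time.Millisecond}
	for attempt := 0; ; attempt++ {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err == nil {
			fmt.Printf("running containers: %s\n", out)
			return
		}
		if attempt >= len(backoffs) {
			// After the last retry the real command surfaces this as GUEST_PAUSE.
			fmt.Println("giving up: list running containers failed:", err)
			return
		}
		fmt.Printf("attempt %d failed (%v), retrying in %v\n", attempt+1, err, backoffs[attempt])
		time.Sleep(backoffs[attempt])
	}
}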
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-362964
helpers_test.go:244: (dbg) docker inspect newest-cni-362964:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a8feb9db9236a02e52470eeb356ca97717847d767470ea656413e040d80f3a41",
	        "Created": "2025-12-13T13:46:43.902071196Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 743998,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:47:15.617130493Z",
	            "FinishedAt": "2025-12-13T13:47:14.7743152Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/a8feb9db9236a02e52470eeb356ca97717847d767470ea656413e040d80f3a41/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a8feb9db9236a02e52470eeb356ca97717847d767470ea656413e040d80f3a41/hostname",
	        "HostsPath": "/var/lib/docker/containers/a8feb9db9236a02e52470eeb356ca97717847d767470ea656413e040d80f3a41/hosts",
	        "LogPath": "/var/lib/docker/containers/a8feb9db9236a02e52470eeb356ca97717847d767470ea656413e040d80f3a41/a8feb9db9236a02e52470eeb356ca97717847d767470ea656413e040d80f3a41-json.log",
	        "Name": "/newest-cni-362964",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-362964:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-362964",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a8feb9db9236a02e52470eeb356ca97717847d767470ea656413e040d80f3a41",
	                "LowerDir": "/var/lib/docker/overlay2/591d532192eba7f9513e2e7a3f154ba6c3bec034fce6fcc25e10cc29cfa2afeb-init/diff:/var/lib/docker/overlay2/2ab30f867418f233812f5ff754587aaeab7569a5579dc6a5c99873a35cf81eb6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/591d532192eba7f9513e2e7a3f154ba6c3bec034fce6fcc25e10cc29cfa2afeb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/591d532192eba7f9513e2e7a3f154ba6c3bec034fce6fcc25e10cc29cfa2afeb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/591d532192eba7f9513e2e7a3f154ba6c3bec034fce6fcc25e10cc29cfa2afeb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-362964",
	                "Source": "/var/lib/docker/volumes/newest-cni-362964/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-362964",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-362964",
	                "name.minikube.sigs.k8s.io": "newest-cni-362964",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cb0f155dc1b5d3a688e5599a95e8671795650988558c901efb92d1fa0ade0db2",
	            "SandboxKey": "/var/run/docker/netns/cb0f155dc1b5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33520"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33521"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33524"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33522"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-362964": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a5cddf8c31ff9d3f9d9f694626ad7e5d879d33f2650fe55e248b8c0b8c028028",
	                    "EndpointID": "82a8372c79df8eefcb4fb4110a08b13a7be607c464d25deaf5e1c2f64a34e91a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ae:a4:98:55:d5:2e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-362964",
	                        "a8feb9db9236"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
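The inspect output above also explains the SSH endpoint used earlier in the pause stderr: the cli_runner call formatted the container with the template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, which against this JSON yields 33520, the 127.0.0.1 port the ssh client dialed. A small sketch of that lookup follows; the template and container name come from the log, the rest (error handling, trimming) is assumed.

// ssh_port_sketch.go - resolve the host-side SSH port from docker inspect.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "newest-cni-362964"
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, name).Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// Against the inspect output above this prints 33520.
	fmt.Println("ssh port:", strings.TrimSpace(string(out)))
}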
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-362964 -n newest-cni-362964
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-362964 -n newest-cni-362964: exit status 2 (325.940009ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-362964 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-038239 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-038239 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ image   │ old-k8s-version-417583 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p old-k8s-version-417583 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-038239 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p default-k8s-diff-port-038239 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:47 UTC │
	│ delete  │ -p old-k8s-version-417583                                                                                                                                                                                                                            │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ delete  │ -p old-k8s-version-417583                                                                                                                                                                                                                            │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p newest-cni-362964 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:47 UTC │
	│ image   │ no-preload-992258 image list --format=json                                                                                                                                                                                                           │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p no-preload-992258 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ delete  │ -p no-preload-992258                                                                                                                                                                                                                                 │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ image   │ embed-certs-973953 image list --format=json                                                                                                                                                                                                          │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p embed-certs-973953 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ delete  │ -p no-preload-992258                                                                                                                                                                                                                                 │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ delete  │ -p embed-certs-973953                                                                                                                                                                                                                                │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ addons  │ enable metrics-server -p newest-cni-362964 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │                     │
	│ delete  │ -p embed-certs-973953                                                                                                                                                                                                                                │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ stop    │ -p newest-cni-362964 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ addons  │ enable dashboard -p newest-cni-362964 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ start   │ -p newest-cni-362964 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ image   │ newest-cni-362964 image list --format=json                                                                                                                                                                                                           │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ pause   │ -p newest-cni-362964 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │                     │
	│ image   │ default-k8s-diff-port-038239 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ pause   │ -p default-k8s-diff-port-038239 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:47:15
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:47:15.396688  743793 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:47:15.396994  743793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:47:15.397008  743793 out.go:374] Setting ErrFile to fd 2...
	I1213 13:47:15.397013  743793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:47:15.397213  743793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:47:15.397667  743793 out.go:368] Setting JSON to false
	I1213 13:47:15.398880  743793 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8983,"bootTime":1765624652,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:47:15.398951  743793 start.go:143] virtualization: kvm guest
	I1213 13:47:15.401034  743793 out.go:179] * [newest-cni-362964] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:47:15.402467  743793 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:47:15.402473  743793 notify.go:221] Checking for updates...
	I1213 13:47:15.404652  743793 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:47:15.406062  743793 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:47:15.407288  743793 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:47:15.408413  743793 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:47:15.409475  743793 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:47:15.411121  743793 config.go:182] Loaded profile config "newest-cni-362964": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:47:15.411682  743793 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:47:15.438362  743793 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:47:15.438448  743793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:47:15.493074  743793 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-13 13:47:15.482438278 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:47:15.493183  743793 docker.go:319] overlay module found
	I1213 13:47:15.494725  743793 out.go:179] * Using the docker driver based on existing profile
	I1213 13:47:15.495689  743793 start.go:309] selected driver: docker
	I1213 13:47:15.495700  743793 start.go:927] validating driver "docker" against &{Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:47:15.495792  743793 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:47:15.496338  743793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:47:15.549301  743793 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-13 13:47:15.539654496 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:47:15.549628  743793 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 13:47:15.549663  743793 cni.go:84] Creating CNI manager for ""
	I1213 13:47:15.549714  743793 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:47:15.549744  743793 start.go:353] cluster config:
	{Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:47:15.551716  743793 out.go:179] * Starting "newest-cni-362964" primary control-plane node in "newest-cni-362964" cluster
	I1213 13:47:15.552847  743793 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 13:47:15.553932  743793 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 13:47:15.555067  743793 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:47:15.555097  743793 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1213 13:47:15.555109  743793 cache.go:65] Caching tarball of preloaded images
	I1213 13:47:15.555160  743793 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 13:47:15.555218  743793 preload.go:238] Found /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 13:47:15.555231  743793 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 13:47:15.555336  743793 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/config.json ...
	I1213 13:47:15.575398  743793 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 13:47:15.575441  743793 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 13:47:15.575458  743793 cache.go:243] Successfully downloaded all kic artifacts
	I1213 13:47:15.575494  743793 start.go:360] acquireMachinesLock for newest-cni-362964: {Name:mk61572d281c54a6e0670409b0733cc12a8d00e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:47:15.575570  743793 start.go:364] duration metric: took 43.268µs to acquireMachinesLock for "newest-cni-362964"
	I1213 13:47:15.575593  743793 start.go:96] Skipping create...Using existing machine configuration
	I1213 13:47:15.575602  743793 fix.go:54] fixHost starting: 
	I1213 13:47:15.575821  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:15.591746  743793 fix.go:112] recreateIfNeeded on newest-cni-362964: state=Stopped err=<nil>
	W1213 13:47:15.591771  743793 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 13:47:15.593293  743793 out.go:252] * Restarting existing docker container for "newest-cni-362964" ...
	I1213 13:47:15.593360  743793 cli_runner.go:164] Run: docker start newest-cni-362964
	I1213 13:47:15.823041  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:15.840403  743793 kic.go:430] container "newest-cni-362964" state is running.
	I1213 13:47:15.840799  743793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:47:15.859084  743793 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/config.json ...
	I1213 13:47:15.859284  743793 machine.go:94] provisionDockerMachine start ...
	I1213 13:47:15.859348  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:15.877682  743793 main.go:143] libmachine: Using SSH client type: native
	I1213 13:47:15.878002  743793 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1213 13:47:15.878021  743793 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:47:15.878655  743793 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47684->127.0.0.1:33520: read: connection reset by peer
	I1213 13:47:19.010243  743793 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-362964
	
	I1213 13:47:19.010275  743793 ubuntu.go:182] provisioning hostname "newest-cni-362964"
	I1213 13:47:19.010342  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:19.028649  743793 main.go:143] libmachine: Using SSH client type: native
	I1213 13:47:19.028926  743793 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1213 13:47:19.028944  743793 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-362964 && echo "newest-cni-362964" | sudo tee /etc/hostname
	I1213 13:47:19.167849  743793 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-362964
	
	I1213 13:47:19.167956  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:19.185955  743793 main.go:143] libmachine: Using SSH client type: native
	I1213 13:47:19.186189  743793 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1213 13:47:19.186207  743793 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-362964' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-362964/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-362964' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:47:19.317047  743793 main.go:143] libmachine: SSH cmd err, output: <nil>: 
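	[editor's note] The provisioning steps above run shell commands inside the KIC container over its published SSH port (127.0.0.1:33520 in this run) using libmachine's native SSH client. As a rough, hedged sketch of that pattern (not the actual minikube code), the following Go snippet opens such a session with golang.org/x/crypto/ssh and runs the same initial `hostname` probe; the port and key path are the ones printed in the log and will differ per run.

```go
// Hedged sketch: connect to a KIC container over its mapped SSH port and run
// a command, mirroring the "provisionDockerMachine" steps in the log above.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and port taken from the log; adjust for your own run.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only: no host key pinning
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33520", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.Output("hostname") // same probe the provisioner runs first
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("hostname: %s", out)
}
```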
	I1213 13:47:19.317087  743793 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-390571/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-390571/.minikube}
	I1213 13:47:19.317117  743793 ubuntu.go:190] setting up certificates
	I1213 13:47:19.317129  743793 provision.go:84] configureAuth start
	I1213 13:47:19.317214  743793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:47:19.335703  743793 provision.go:143] copyHostCerts
	I1213 13:47:19.335786  743793 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem, removing ...
	I1213 13:47:19.335809  743793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem
	I1213 13:47:19.335895  743793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem (1078 bytes)
	I1213 13:47:19.336007  743793 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem, removing ...
	I1213 13:47:19.336019  743793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem
	I1213 13:47:19.336046  743793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem (1123 bytes)
	I1213 13:47:19.336102  743793 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem, removing ...
	I1213 13:47:19.336109  743793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem
	I1213 13:47:19.336133  743793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem (1679 bytes)
	I1213 13:47:19.336181  743793 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem org=jenkins.newest-cni-362964 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-362964]
	I1213 13:47:19.394450  743793 provision.go:177] copyRemoteCerts
	I1213 13:47:19.394502  743793 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:47:19.394534  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:19.411750  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:19.507173  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:47:19.523766  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 13:47:19.539629  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 13:47:19.555589  743793 provision.go:87] duration metric: took 238.436679ms to configureAuth
	I1213 13:47:19.555611  743793 ubuntu.go:206] setting minikube options for container-runtime
	I1213 13:47:19.555811  743793 config.go:182] Loaded profile config "newest-cni-362964": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:47:19.555940  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:19.574297  743793 main.go:143] libmachine: Using SSH client type: native
	I1213 13:47:19.574507  743793 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1213 13:47:19.574528  743793 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:47:19.858331  743793 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:47:19.858358  743793 machine.go:97] duration metric: took 3.999058826s to provisionDockerMachine
	I1213 13:47:19.858370  743793 start.go:293] postStartSetup for "newest-cni-362964" (driver="docker")
	I1213 13:47:19.858383  743793 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:47:19.858433  743793 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:47:19.858474  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:19.876049  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:19.971385  743793 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:47:19.974791  743793 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 13:47:19.974823  743793 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 13:47:19.974837  743793 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/addons for local assets ...
	I1213 13:47:19.974893  743793 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/files for local assets ...
	I1213 13:47:19.974988  743793 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem -> 3941302.pem in /etc/ssl/certs
	I1213 13:47:19.975100  743793 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 13:47:19.983356  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:47:20.000196  743793 start.go:296] duration metric: took 141.812887ms for postStartSetup
	I1213 13:47:20.000258  743793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:47:20.000314  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:20.017528  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:20.108567  743793 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 13:47:20.112996  743793 fix.go:56] duration metric: took 4.537388049s for fixHost
	I1213 13:47:20.113024  743793 start.go:83] releasing machines lock for "newest-cni-362964", held for 4.537441376s
	I1213 13:47:20.113088  743793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:47:20.130726  743793 ssh_runner.go:195] Run: cat /version.json
	I1213 13:47:20.130793  743793 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:47:20.130802  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:20.130878  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:20.148822  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:20.150135  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:20.295420  743793 ssh_runner.go:195] Run: systemctl --version
	I1213 13:47:20.301859  743793 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:47:20.336602  743793 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 13:47:20.341211  743793 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:47:20.341287  743793 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:47:20.349054  743793 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 13:47:20.349073  743793 start.go:496] detecting cgroup driver to use...
	I1213 13:47:20.349107  743793 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 13:47:20.349163  743793 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:47:20.362399  743793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:47:20.374254  743793 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:47:20.374306  743793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:47:20.388234  743793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:47:20.399676  743793 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:47:20.474562  743793 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:47:20.554145  743793 docker.go:234] disabling docker service ...
	I1213 13:47:20.554226  743793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:47:20.568676  743793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:47:20.580110  743793 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:47:20.658535  743793 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:47:20.737176  743793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:47:20.748856  743793 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:47:20.762068  743793 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 13:47:20.762129  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.770675  743793 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 13:47:20.770733  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.779036  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.786966  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.795024  743793 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:47:20.802620  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.810583  743793 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.818388  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.826379  743793 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:47:20.833195  743793 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:47:20.839891  743793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:47:20.916966  743793 ssh_runner.go:195] Run: sudo systemctl restart crio
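	[editor's note] The block above points crictl at the CRI-O socket and patches /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl) before restarting CRI-O. A hedged local sketch of the core edits follows; the file path and values are the ones from the log, and minikube itself drives these through its ssh_runner rather than locally.

```go
// Hedged sketch: apply the CRI-O drop-in edits seen in the log with sed,
// then restart the service. minikube performs these over SSH instead.
package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	seds := []string{
		`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|`,
		`s|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|`,
		`/conmon_cgroup = .*/d`,
		`/cgroup_manager = .*/a conmon_cgroup = "pod"`,
	}
	for _, expr := range seds {
		run("sudo", "sed", "-i", expr, conf)
	}
	run("sudo", "systemctl", "restart", "crio")
}
```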
	I1213 13:47:21.051005  743793 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:47:21.051061  743793 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:47:21.054826  743793 start.go:564] Will wait 60s for crictl version
	I1213 13:47:21.054891  743793 ssh_runner.go:195] Run: which crictl
	I1213 13:47:21.058302  743793 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 13:47:21.081279  743793 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 13:47:21.081361  743793 ssh_runner.go:195] Run: crio --version
	I1213 13:47:21.110234  743793 ssh_runner.go:195] Run: crio --version
	I1213 13:47:21.139180  743793 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 13:47:21.140352  743793 cli_runner.go:164] Run: docker network inspect newest-cni-362964 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
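	[editor's note] The network metadata above is extracted with a single `docker network inspect` Go template. As a hedged alternative sketch (assuming only that the docker CLI is on PATH), the same subnet and gateway fields can be read by decoding the inspect JSON:

```go
// Hedged sketch: read subnet/gateway of the profile's docker network by
// decoding `docker network inspect` JSON instead of the template in the log.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type network struct {
	Name   string
	Driver string
	IPAM   struct {
		Config []struct {
			Subnet  string
			Gateway string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "network", "inspect", "newest-cni-362964").Output()
	if err != nil {
		log.Fatal(err)
	}
	var nets []network
	if err := json.Unmarshal(out, &nets); err != nil {
		log.Fatal(err)
	}
	for _, n := range nets {
		for _, c := range n.IPAM.Config {
			fmt.Printf("%s (%s): subnet=%s gateway=%s\n", n.Name, n.Driver, c.Subnet, c.Gateway)
		}
	}
}
```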
	I1213 13:47:21.158817  743793 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 13:47:21.162980  743793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:47:21.174525  743793 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 13:47:21.175553  743793 kubeadm.go:884] updating cluster {Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:47:21.175710  743793 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:47:21.175761  743793 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:47:21.210092  743793 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:47:21.210114  743793 crio.go:433] Images already preloaded, skipping extraction
	I1213 13:47:21.210160  743793 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:47:21.234690  743793 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:47:21.234711  743793 cache_images.go:86] Images are preloaded, skipping loading
	I1213 13:47:21.234719  743793 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 13:47:21.234845  743793 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-362964 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 13:47:21.234912  743793 ssh_runner.go:195] Run: crio config
	I1213 13:47:21.282460  743793 cni.go:84] Creating CNI manager for ""
	I1213 13:47:21.282487  743793 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:47:21.282509  743793 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 13:47:21.282539  743793 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-362964 NodeName:newest-cni-362964 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:47:21.282708  743793 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-362964"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 13:47:21.282807  743793 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 13:47:21.290750  743793 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:47:21.290846  743793 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:47:21.298168  743793 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 13:47:21.310228  743793 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 13:47:21.322581  743793 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1213 13:47:21.334293  743793 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 13:47:21.337735  743793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:47:21.347200  743793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:47:21.425075  743793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:47:21.445740  743793 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964 for IP: 192.168.76.2
	I1213 13:47:21.445766  743793 certs.go:195] generating shared ca certs ...
	I1213 13:47:21.445805  743793 certs.go:227] acquiring lock for ca certs: {Name:mkb6963f3134ffd486c672ddb3a967e56122d5d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:47:21.445974  743793 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key
	I1213 13:47:21.446031  743793 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key
	I1213 13:47:21.446043  743793 certs.go:257] generating profile certs ...
	I1213 13:47:21.446154  743793 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.key
	I1213 13:47:21.446224  743793 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb
	I1213 13:47:21.446272  743793 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key
	I1213 13:47:21.446406  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem (1338 bytes)
	W1213 13:47:21.446452  743793 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130_empty.pem, impossibly tiny 0 bytes
	I1213 13:47:21.446466  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:47:21.446502  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:47:21.446547  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:47:21.446593  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem (1679 bytes)
	I1213 13:47:21.446654  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:47:21.447541  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:47:21.465298  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 13:47:21.483440  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:47:21.502103  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 13:47:21.522629  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 13:47:21.542814  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 13:47:21.559135  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:47:21.575971  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 13:47:21.591916  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem --> /usr/share/ca-certificates/394130.pem (1338 bytes)
	I1213 13:47:21.608394  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /usr/share/ca-certificates/3941302.pem (1708 bytes)
	I1213 13:47:21.624618  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:47:21.642224  743793 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:47:21.654261  743793 ssh_runner.go:195] Run: openssl version
	I1213 13:47:21.660050  743793 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/394130.pem
	I1213 13:47:21.668116  743793 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/394130.pem /etc/ssl/certs/394130.pem
	I1213 13:47:21.675369  743793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/394130.pem
	I1213 13:47:21.679216  743793 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/394130.pem
	I1213 13:47:21.679263  743793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/394130.pem
	I1213 13:47:21.712864  743793 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 13:47:21.720130  743793 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3941302.pem
	I1213 13:47:21.727050  743793 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3941302.pem /etc/ssl/certs/3941302.pem
	I1213 13:47:21.733917  743793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3941302.pem
	I1213 13:47:21.737465  743793 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/3941302.pem
	I1213 13:47:21.737512  743793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3941302.pem
	I1213 13:47:21.771088  743793 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 13:47:21.778112  743793 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:47:21.784916  743793 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:47:21.791680  743793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:47:21.794961  743793 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:47:21.795003  743793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:47:21.829333  743793 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
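	[editor's note] Each CA above is installed by copying it to /usr/share/ca-certificates, hashing it with `openssl x509 -hash -noout`, and checking for the `<hash>.0` link under /etc/ssl/certs that lets OpenSSL find it by subject hash (b5213941.0 corresponding to minikubeCA in this run). A hedged sketch of that hash-and-symlink step, using the paths from the log and minimal error handling:

```go
// Hedged sketch: compute the OpenSSL subject hash of a CA and create the
// /etc/ssl/certs/<hash>.0 symlink, mirroring the openssl/ln steps above.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in this log

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // equivalent of `ln -fs`: replace any stale link
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	log.Printf("linked %s -> %s", link, cert)
}
```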
	I1213 13:47:21.836601  743793 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:47:21.840092  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 13:47:21.873865  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 13:47:21.907577  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 13:47:21.942677  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 13:47:21.990730  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 13:47:22.038527  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 13:47:22.089275  743793 kubeadm.go:401] StartCluster: {Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:47:22.089396  743793 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:47:22.089456  743793 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:47:22.124825  743793 cri.go:89] found id: "5da4caf87e21e2c24c49feccc728cddb718f645b6ea4db87e0bf78cf3c81e434"
	I1213 13:47:22.124858  743793 cri.go:89] found id: "bf5900d175c7d2696a8f8d812ce80bb83b78d5e729b180a06ffe24fd4380248b"
	I1213 13:47:22.124862  743793 cri.go:89] found id: "110b53112a1a28576070f2a2242056e28359eefcce484ce9f1badc19b9aa9fe0"
	I1213 13:47:22.124866  743793 cri.go:89] found id: "467a2e1a14516b194138faf28743f2e31cc6c2c67e3a2b45354fa6c0ff15d609"
	I1213 13:47:22.124869  743793 cri.go:89] found id: ""
	I1213 13:47:22.124908  743793 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 13:47:22.137341  743793 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:47:22Z" level=error msg="open /run/runc: no such file or directory"
	I1213 13:47:22.137416  743793 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:47:22.145362  743793 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 13:47:22.145377  743793 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 13:47:22.145421  743793 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 13:47:22.152664  743793 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 13:47:22.153211  743793 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-362964" does not appear in /home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:47:22.153502  743793 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-390571/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-362964" cluster setting kubeconfig missing "newest-cni-362964" context setting]
	I1213 13:47:22.154092  743793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/kubeconfig: {Name:mke96882ff9199e558f67b9408c8f04265bde7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
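	[editor's note] kubeconfig.go noticed that the restarted profile was missing from the jenkins kubeconfig and repaired it by adding the cluster and context entries before rewriting the file under a lock. A hedged sketch of such a repair with client-go's clientcmd API; the endpoint and names come from the log, and the real minikube code additionally manages certificate paths, the auth info, and the current context.

```go
// Hedged sketch: add a missing cluster/context entry to a kubeconfig, similar
// to the "needs updating (will repair)" step in the log above.
package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/22122-390571/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatal(err)
	}

	name := "newest-cni-362964"
	if _, ok := cfg.Clusters[name]; !ok {
		cluster := api.NewCluster()
		cluster.Server = "https://192.168.76.2:8443"
		cluster.CertificateAuthority = "/home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt"
		cfg.Clusters[name] = cluster
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		log.Fatal(err)
	}
}
```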
	I1213 13:47:22.155563  743793 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 13:47:22.163308  743793 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1213 13:47:22.163339  743793 kubeadm.go:602] duration metric: took 17.955654ms to restartPrimaryControlPlane
	I1213 13:47:22.163350  743793 kubeadm.go:403] duration metric: took 74.090212ms to StartCluster
	I1213 13:47:22.163370  743793 settings.go:142] acquiring lock: {Name:mkb44193ba58b09d8615650747eaad19c43e1a80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:47:22.163433  743793 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:47:22.164305  743793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/kubeconfig: {Name:mke96882ff9199e558f67b9408c8f04265bde7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:47:22.164552  743793 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:47:22.164629  743793 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 13:47:22.164752  743793 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-362964"
	I1213 13:47:22.164768  743793 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-362964"
	W1213 13:47:22.164802  743793 addons.go:248] addon storage-provisioner should already be in state true
	I1213 13:47:22.164794  743793 addons.go:70] Setting dashboard=true in profile "newest-cni-362964"
	I1213 13:47:22.164827  743793 config.go:182] Loaded profile config "newest-cni-362964": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:47:22.164851  743793 host.go:66] Checking if "newest-cni-362964" exists ...
	I1213 13:47:22.164849  743793 addons.go:70] Setting default-storageclass=true in profile "newest-cni-362964"
	I1213 13:47:22.164885  743793 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-362964"
	I1213 13:47:22.164831  743793 addons.go:239] Setting addon dashboard=true in "newest-cni-362964"
	W1213 13:47:22.164998  743793 addons.go:248] addon dashboard should already be in state true
	I1213 13:47:22.165031  743793 host.go:66] Checking if "newest-cni-362964" exists ...
	I1213 13:47:22.165174  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:22.165349  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:22.165507  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:22.166701  743793 out.go:179] * Verifying Kubernetes components...
	I1213 13:47:22.167981  743793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:47:22.192293  743793 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 13:47:22.192607  743793 addons.go:239] Setting addon default-storageclass=true in "newest-cni-362964"
	W1213 13:47:22.192634  743793 addons.go:248] addon default-storageclass should already be in state true
	I1213 13:47:22.192668  743793 host.go:66] Checking if "newest-cni-362964" exists ...
	I1213 13:47:22.193205  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:22.193489  743793 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 13:47:22.194545  743793 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 13:47:22.194630  743793 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:47:22.194650  743793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 13:47:22.194717  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:22.197270  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 13:47:22.197289  743793 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 13:47:22.197336  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:22.223581  743793 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 13:47:22.223609  743793 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 13:47:22.223725  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:22.235014  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:22.236249  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:22.249859  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:22.322026  743793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:47:22.335417  743793 api_server.go:52] waiting for apiserver process to appear ...
	I1213 13:47:22.335494  743793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:47:22.347746  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 13:47:22.347860  743793 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 13:47:22.348903  743793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:47:22.348995  743793 api_server.go:72] duration metric: took 184.405092ms to wait for apiserver process to appear ...
	I1213 13:47:22.349020  743793 api_server.go:88] waiting for apiserver healthz status ...
	I1213 13:47:22.349038  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:22.357705  743793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 13:47:22.364407  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 13:47:22.364428  743793 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 13:47:22.378144  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 13:47:22.378163  743793 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 13:47:22.392593  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 13:47:22.392619  743793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 13:47:22.406406  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 13:47:22.406430  743793 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 13:47:22.420459  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 13:47:22.420499  743793 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 13:47:22.432910  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 13:47:22.432934  743793 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 13:47:22.444815  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 13:47:22.444841  743793 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 13:47:22.458380  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 13:47:22.458404  743793 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 13:47:22.471198  743793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 13:47:23.763047  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 13:47:23.763087  743793 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 13:47:23.763104  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:23.768125  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 13:47:23.768149  743793 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 13:47:23.849412  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:23.855339  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:47:23.855368  743793 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
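	[editor's note] The wait above first sees 403 (the anonymous probe is rejected until the RBAC bootstrap roles exist) and then 500 while post-start hooks are still completing; api_server.go keeps re-probing /healthz until it returns 200. A hedged sketch of such a probe loop follows; it skips TLS verification the way an anonymous probe of a self-signed apiserver must, and the URL is the one in the log.

```go
// Hedged sketch: poll the apiserver /healthz endpoint until it reports 200,
// tolerating the 403/500 responses seen in the log while the control plane
// finishes its post-start hooks.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Anonymous probe of a self-signed apiserver: no client cert, no CA
			// verification, matching the unauthenticated check in the log.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	url := "https://192.168.76.2:8443/healthz"
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			// 403 = anonymous access not yet allowed, 500 = hooks still failing.
			log.Printf("healthz returned %d:\n%s", resp.StatusCode, body)
		} else {
			log.Printf("healthz probe error: %v", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver never became healthy")
}
```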
	I1213 13:47:24.312519  743793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.963582954s)
	I1213 13:47:24.312612  743793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.954874796s)
	I1213 13:47:24.312784  743793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.841538299s)
	I1213 13:47:24.314353  743793 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-362964 addons enable metrics-server
	
	I1213 13:47:24.323239  743793 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1213 13:47:24.324346  743793 addons.go:530] duration metric: took 2.159730307s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1213 13:47:24.349242  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:24.353865  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:47:24.353887  743793 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:47:24.849405  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:24.854959  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:47:24.854986  743793 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:47:25.349483  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:25.353727  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1213 13:47:25.354695  743793 api_server.go:141] control plane version: v1.35.0-beta.0
	I1213 13:47:25.354720  743793 api_server.go:131] duration metric: took 3.00569336s to wait for apiserver health ...
	I1213 13:47:25.354729  743793 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 13:47:25.358250  743793 system_pods.go:59] 8 kube-system pods found
	I1213 13:47:25.358279  743793 system_pods.go:61] "coredns-7d764666f9-rqktl" [7c70d7d0-5139-4893-905c-0e183495035e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 13:47:25.358287  743793 system_pods.go:61] "etcd-newest-cni-362964" [49d03570-d59e-4e95-902f-1994733e6009] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 13:47:25.358295  743793 system_pods.go:61] "kindnet-qk8dn" [0df822e7-da1c-43ee-9a1e-b2131ae84e50] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1213 13:47:25.358303  743793 system_pods.go:61] "kube-apiserver-newest-cni-362964" [31c7799d-0188-4e2f-8d32-eb6e3ffe29ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 13:47:25.358315  743793 system_pods.go:61] "kube-controller-manager-newest-cni-362964" [cee82184-0e71-4dfb-8851-d642f2716578] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 13:47:25.358325  743793 system_pods.go:61] "kube-proxy-97cpx" [c081628a-7cdd-4b8c-9d28-9d95707c6064] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 13:47:25.358338  743793 system_pods.go:61] "kube-scheduler-newest-cni-362964" [d160f41f-e904-4d11-9b2c-157bfcbc668f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 13:47:25.358346  743793 system_pods.go:61] "storage-provisioner" [b6d4689e-b3f1-496d-bfd4-11cb93ea7c15] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 13:47:25.358352  743793 system_pods.go:74] duration metric: took 3.617333ms to wait for pod list to return data ...
	I1213 13:47:25.358359  743793 default_sa.go:34] waiting for default service account to be created ...
	I1213 13:47:25.360577  743793 default_sa.go:45] found service account: "default"
	I1213 13:47:25.360597  743793 default_sa.go:55] duration metric: took 2.231432ms for default service account to be created ...
	I1213 13:47:25.360614  743793 kubeadm.go:587] duration metric: took 3.196023464s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 13:47:25.360633  743793 node_conditions.go:102] verifying NodePressure condition ...
	I1213 13:47:25.362709  743793 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 13:47:25.362732  743793 node_conditions.go:123] node cpu capacity is 8
	I1213 13:47:25.362750  743793 node_conditions.go:105] duration metric: took 2.111782ms to run NodePressure ...
	I1213 13:47:25.362764  743793 start.go:242] waiting for startup goroutines ...
	I1213 13:47:25.362789  743793 start.go:247] waiting for cluster config update ...
	I1213 13:47:25.362806  743793 start.go:256] writing updated cluster config ...
	I1213 13:47:25.363125  743793 ssh_runner.go:195] Run: rm -f paused
	I1213 13:47:25.410985  743793 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1213 13:47:25.412352  743793 out.go:179] * Done! kubectl is now configured to use "newest-cni-362964" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.81802484Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.821501957Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=bb6c33f3-b811-48b2-a930-72913645a535 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.822288055Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e64dc6f9-7d38-412b-b763-a27304bbc37c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.823230713Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.823884086Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.824148801Z" level=info msg="Ran pod sandbox 7873dae1ce5b0f115a6d299d9641ad4beb45e4f13fea584a79be13a1ca02fefa with infra container: kube-system/kindnet-qk8dn/POD" id=bb6c33f3-b811-48b2-a930-72913645a535 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.824759202Z" level=info msg="Ran pod sandbox ecd6a972053346198564a51143eef815dd0685d7d21fea2e9e2c29b0488f4146 with infra container: kube-system/kube-proxy-97cpx/POD" id=e64dc6f9-7d38-412b-b763-a27304bbc37c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.825475942Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2c3b201f-f517-4988-9cf3-e8beebae70db name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.825822358Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=e27df904-ff39-4dd1-97ba-d0bc7e9f5546 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.8264906Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=77bca4af-4d48-4d90-a09f-b765f36e6402 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.827000178Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=bf1f008a-74fa-476a-a9d5-661b69d5976c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.827958794Z" level=info msg="Creating container: kube-system/kube-proxy-97cpx/kube-proxy" id=25569584-5ccf-4678-80a1-a0b1c1bf7851 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.827984324Z" level=info msg="Creating container: kube-system/kindnet-qk8dn/kindnet-cni" id=8286d548-bccf-46c9-8bc9-3ae0e57199bc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.828045975Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.828063389Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.833871107Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.834567397Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.835557193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.836028032Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.866434361Z" level=info msg="Created container 2a7d6843350bc29728cd2ac4b2bc9e2e65103e590f5de16c0ae3fbedc865ba87: kube-system/kindnet-qk8dn/kindnet-cni" id=8286d548-bccf-46c9-8bc9-3ae0e57199bc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.86708497Z" level=info msg="Starting container: 2a7d6843350bc29728cd2ac4b2bc9e2e65103e590f5de16c0ae3fbedc865ba87" id=949af17e-8728-4924-b500-5a0b7d23d95d name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.868826905Z" level=info msg="Created container af96593a4aff28be333ae03c866d3b489ad8ed84e208d79e37b6f05de74ac937: kube-system/kube-proxy-97cpx/kube-proxy" id=25569584-5ccf-4678-80a1-a0b1c1bf7851 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.8688955Z" level=info msg="Started container" PID=1063 containerID=2a7d6843350bc29728cd2ac4b2bc9e2e65103e590f5de16c0ae3fbedc865ba87 description=kube-system/kindnet-qk8dn/kindnet-cni id=949af17e-8728-4924-b500-5a0b7d23d95d name=/runtime.v1.RuntimeService/StartContainer sandboxID=7873dae1ce5b0f115a6d299d9641ad4beb45e4f13fea584a79be13a1ca02fefa
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.87122653Z" level=info msg="Starting container: af96593a4aff28be333ae03c866d3b489ad8ed84e208d79e37b6f05de74ac937" id=05d04bbd-4018-421b-b5c8-3710e473e204 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.874991347Z" level=info msg="Started container" PID=1062 containerID=af96593a4aff28be333ae03c866d3b489ad8ed84e208d79e37b6f05de74ac937 description=kube-system/kube-proxy-97cpx/kube-proxy id=05d04bbd-4018-421b-b5c8-3710e473e204 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ecd6a972053346198564a51143eef815dd0685d7d21fea2e9e2c29b0488f4146
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	2a7d6843350bc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   7873dae1ce5b0       kindnet-qk8dn                               kube-system
	af96593a4aff2       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   4 seconds ago       Running             kube-proxy                1                   ecd6a97205334       kube-proxy-97cpx                            kube-system
	5da4caf87e21e       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   7 seconds ago       Running             kube-controller-manager   1                   46e4eb8e7c6f2       kube-controller-manager-newest-cni-362964   kube-system
	bf5900d175c7d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   7 seconds ago       Running             etcd                      1                   311d1ac61118e       etcd-newest-cni-362964                      kube-system
	110b53112a1a2       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   7 seconds ago       Running             kube-scheduler            1                   78b27c52b8f68       kube-scheduler-newest-cni-362964            kube-system
	467a2e1a14516       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   7 seconds ago       Running             kube-apiserver            1                   62bfcab2f7045       kube-apiserver-newest-cni-362964            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-362964
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-362964
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=newest-cni-362964
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_46_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:46:56 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-362964
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:47:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:47:23 +0000   Sat, 13 Dec 2025 13:46:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:47:23 +0000   Sat, 13 Dec 2025 13:46:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:47:23 +0000   Sat, 13 Dec 2025 13:46:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 13 Dec 2025 13:47:23 +0000   Sat, 13 Dec 2025 13:46:54 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-362964
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                7984a389-2b8f-4f40-bc98-e167ef24613c
	  Boot ID:                    3a031c38-2de5-4abf-9191-ca3cf8c37af1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-362964                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-qk8dn                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-newest-cni-362964             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-newest-cni-362964    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-97cpx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-newest-cni-362964             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  26s   node-controller  Node newest-cni-362964 event: Registered Node newest-cni-362964 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-362964 event: Registered Node newest-cni-362964 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 d4 5a 35 c7 c3 08 06
	[  +0.021086] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +19.681588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 0c 97 18 9b e3 08 06
	[  +0.000314] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 04 61 d2 c8 ed 08 06
	[Dec13 13:44] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[  +7.252347] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 ce fd 58 59 0f 08 06
	[  +0.000117] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	[  +1.567410] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 59 b8 80 29 4a 08 06
	[  +0.000370] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +13.814205] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 cb 6b 87 5d af 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[Dec13 13:45] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8e 49 cc d7 b3 9c 08 06
	[  +0.000851] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	
	
	==> etcd [bf5900d175c7d2696a8f8d812ce80bb83b78d5e729b180a06ffe24fd4380248b] <==
	{"level":"warn","ts":"2025-12-13T13:47:23.161413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.167622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.173870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.182553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.188737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.195729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.208946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.215275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.221427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.227745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.234085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.253062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.265268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.271544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.277725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.284912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.291362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.297633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.304078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.310676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.330522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.336963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.343053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.349595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.397236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51124","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:47:29 up  2:29,  0 user,  load average: 4.32, 4.14, 2.74
	Linux newest-cni-362964 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2a7d6843350bc29728cd2ac4b2bc9e2e65103e590f5de16c0ae3fbedc865ba87] <==
	I1213 13:47:25.049159       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 13:47:25.049400       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1213 13:47:25.049535       1 main.go:148] setting mtu 1500 for CNI 
	I1213 13:47:25.049552       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 13:47:25.049595       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T13:47:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 13:47:25.248770       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 13:47:25.248850       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 13:47:25.248865       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 13:47:25.249015       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [467a2e1a14516b194138faf28743f2e31cc6c2c67e3a2b45354fa6c0ff15d609] <==
	I1213 13:47:23.847395       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1213 13:47:23.847658       1 aggregator.go:187] initial CRD sync complete...
	I1213 13:47:23.847668       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 13:47:23.847674       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 13:47:23.847681       1 cache.go:39] Caches are synced for autoregister controller
	I1213 13:47:23.847848       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 13:47:23.847260       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 13:47:23.847348       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1213 13:47:23.854117       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 13:47:23.865021       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1213 13:47:23.868214       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:23.868233       1 policy_source.go:248] refreshing policies
	I1213 13:47:23.879150       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 13:47:24.112121       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 13:47:24.136539       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 13:47:24.152961       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 13:47:24.158825       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 13:47:24.165073       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 13:47:24.193919       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.157.228"}
	I1213 13:47:24.204362       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.53.170"}
	I1213 13:47:24.750067       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1213 13:47:27.386264       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:47:27.386310       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:47:27.440130       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 13:47:27.534495       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [5da4caf87e21e2c24c49feccc728cddb718f645b6ea4db87e0bf78cf3c81e434] <==
	I1213 13:47:26.991811       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-362964"
	I1213 13:47:26.991889       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1213 13:47:26.992568       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:26.992657       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:26.994844       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 13:47:26.997928       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:26.999456       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:26.999499       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:26.999924       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.001857       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.001951       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.001992       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.003158       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.003178       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.003298       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.004310       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.004326       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.004360       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.005586       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.010742       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.013962       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.013973       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 13:47:27.013977       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1213 13:47:27.016276       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.095753       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [af96593a4aff28be333ae03c866d3b489ad8ed84e208d79e37b6f05de74ac937] <==
	I1213 13:47:24.913801       1 server_linux.go:53] "Using iptables proxy"
	I1213 13:47:24.985504       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 13:47:25.086565       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:25.086604       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1213 13:47:25.086709       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:47:25.104572       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:47:25.104643       1 server_linux.go:136] "Using iptables Proxier"
	I1213 13:47:25.109745       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:47:25.110206       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 13:47:25.110234       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:47:25.112117       1 config.go:200] "Starting service config controller"
	I1213 13:47:25.112141       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:47:25.111822       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:47:25.112169       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:47:25.112217       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:47:25.112242       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:47:25.112291       1 config.go:309] "Starting node config controller"
	I1213 13:47:25.112302       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:47:25.213201       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 13:47:25.213276       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:47:25.213295       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 13:47:25.213325       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [110b53112a1a28576070f2a2242056e28359eefcce484ce9f1badc19b9aa9fe0] <==
	I1213 13:47:22.313179       1 serving.go:386] Generated self-signed cert in-memory
	W1213 13:47:23.765610       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 13:47:23.765647       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 13:47:23.765659       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 13:47:23.765669       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 13:47:23.802597       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1213 13:47:23.802629       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:47:23.805627       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 13:47:23.805698       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 13:47:23.807304       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 13:47:23.807658       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 13:47:23.906299       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 13 13:47:23 newest-cni-362964 kubelet[677]: I1213 13:47:23.971332     677 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-362964"
	Dec 13 13:47:23 newest-cni-362964 kubelet[677]: I1213 13:47:23.971439     677 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-362964"
	Dec 13 13:47:23 newest-cni-362964 kubelet[677]: I1213 13:47:23.971477     677 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 13 13:47:23 newest-cni-362964 kubelet[677]: I1213 13:47:23.972367     677 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: I1213 13:47:24.013855     677 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-362964"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: E1213 13:47:24.019664     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-362964\" already exists" pod="kube-system/kube-controller-manager-newest-cni-362964"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: E1213 13:47:24.019820     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-362964" containerName="kube-controller-manager"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: I1213 13:47:24.508919     677 apiserver.go:52] "Watching apiserver"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: I1213 13:47:24.516651     677 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: E1213 13:47:24.553356     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-362964" containerName="kube-controller-manager"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: I1213 13:47:24.553418     677 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-362964"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: E1213 13:47:24.553622     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-362964" containerName="etcd"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: E1213 13:47:24.553717     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-362964" containerName="kube-scheduler"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: E1213 13:47:24.558642     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-362964\" already exists" pod="kube-system/kube-apiserver-newest-cni-362964"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: E1213 13:47:24.558721     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-362964" containerName="kube-apiserver"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: I1213 13:47:24.569576     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c081628a-7cdd-4b8c-9d28-9d95707c6064-xtables-lock\") pod \"kube-proxy-97cpx\" (UID: \"c081628a-7cdd-4b8c-9d28-9d95707c6064\") " pod="kube-system/kube-proxy-97cpx"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: I1213 13:47:24.569607     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c081628a-7cdd-4b8c-9d28-9d95707c6064-lib-modules\") pod \"kube-proxy-97cpx\" (UID: \"c081628a-7cdd-4b8c-9d28-9d95707c6064\") " pod="kube-system/kube-proxy-97cpx"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: I1213 13:47:24.569635     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0df822e7-da1c-43ee-9a1e-b2131ae84e50-cni-cfg\") pod \"kindnet-qk8dn\" (UID: \"0df822e7-da1c-43ee-9a1e-b2131ae84e50\") " pod="kube-system/kindnet-qk8dn"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: I1213 13:47:24.569657     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0df822e7-da1c-43ee-9a1e-b2131ae84e50-xtables-lock\") pod \"kindnet-qk8dn\" (UID: \"0df822e7-da1c-43ee-9a1e-b2131ae84e50\") " pod="kube-system/kindnet-qk8dn"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: I1213 13:47:24.569699     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0df822e7-da1c-43ee-9a1e-b2131ae84e50-lib-modules\") pod \"kindnet-qk8dn\" (UID: \"0df822e7-da1c-43ee-9a1e-b2131ae84e50\") " pod="kube-system/kindnet-qk8dn"
	Dec 13 13:47:25 newest-cni-362964 kubelet[677]: E1213 13:47:25.558465     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-362964" containerName="kube-apiserver"
	Dec 13 13:47:26 newest-cni-362964 kubelet[677]: E1213 13:47:26.073655     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-362964" containerName="kube-scheduler"
	Dec 13 13:47:26 newest-cni-362964 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 13:47:26 newest-cni-362964 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 13:47:26 newest-cni-362964 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
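The start log above shows minikube polling the apiserver's /healthz endpoint and treating the 500 responses as transient until the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks report ok. To get the same per-check breakdown by hand against this profile, something like the following should work (a sketch using the standard verbose healthz endpoint; the context name is taken from this run):

	kubectl --context newest-cni-362964 get --raw='/healthz?verbose'

Each [+]/[-] line in the response corresponds to one of the checks quoted in the log.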
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-362964 -n newest-cni-362964
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-362964 -n newest-cni-362964: exit status 2 (338.421202ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
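The status probe above renders a single field through a Go template (--format={{.APIServer}}), which is why it prints Running even though the command as a whole exits 2. A broader one-line snapshot can be taken the same way (a sketch; the field names are assumed to match minikube's status template fields, and the profile name is from this run):

	out/minikube-linux-amd64 status -p newest-cni-362964 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'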
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-362964 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-rqktl storage-provisioner dashboard-metrics-scraper-867fb5f87b-t2g2n kubernetes-dashboard-b84665fb8-lghjt
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-362964 describe pod coredns-7d764666f9-rqktl storage-provisioner dashboard-metrics-scraper-867fb5f87b-t2g2n kubernetes-dashboard-b84665fb8-lghjt
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-362964 describe pod coredns-7d764666f9-rqktl storage-provisioner dashboard-metrics-scraper-867fb5f87b-t2g2n kubernetes-dashboard-b84665fb8-lghjt: exit status 1 (69.869722ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-rqktl" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-t2g2n" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-lghjt" not found

** /stderr **
helpers_test.go:288: kubectl --context newest-cni-362964 describe pod coredns-7d764666f9-rqktl storage-provisioner dashboard-metrics-scraper-867fb5f87b-t2g2n kubernetes-dashboard-b84665fb8-lghjt: exit status 1
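The post-mortem first lists non-running pods with a field selector and then describes them by name, so the NotFound errors above most likely mean those pods were replaced or removed between the two calls. A single query that lists whatever currently matches, including node placement, avoids that window (standard kubectl flags; the context name is from this run):

	kubectl --context newest-cni-362964 get pods -A --field-selector=status.phase!=Running -o wide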
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
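The proxy snapshot above reports HTTP_PROXY, HTTPS_PROXY and NO_PROXY as <empty>. When reproducing locally, the same snapshot can be taken with plain shell (a sketch; it simply filters the current environment for those variable names, case-insensitively):

	env | grep -iE '^(http_proxy|https_proxy|no_proxy)='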
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-362964
helpers_test.go:244: (dbg) docker inspect newest-cni-362964:

-- stdout --
	[
	    {
	        "Id": "a8feb9db9236a02e52470eeb356ca97717847d767470ea656413e040d80f3a41",
	        "Created": "2025-12-13T13:46:43.902071196Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 743998,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:47:15.617130493Z",
	            "FinishedAt": "2025-12-13T13:47:14.7743152Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/a8feb9db9236a02e52470eeb356ca97717847d767470ea656413e040d80f3a41/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a8feb9db9236a02e52470eeb356ca97717847d767470ea656413e040d80f3a41/hostname",
	        "HostsPath": "/var/lib/docker/containers/a8feb9db9236a02e52470eeb356ca97717847d767470ea656413e040d80f3a41/hosts",
	        "LogPath": "/var/lib/docker/containers/a8feb9db9236a02e52470eeb356ca97717847d767470ea656413e040d80f3a41/a8feb9db9236a02e52470eeb356ca97717847d767470ea656413e040d80f3a41-json.log",
	        "Name": "/newest-cni-362964",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-362964:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-362964",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a8feb9db9236a02e52470eeb356ca97717847d767470ea656413e040d80f3a41",
	                "LowerDir": "/var/lib/docker/overlay2/591d532192eba7f9513e2e7a3f154ba6c3bec034fce6fcc25e10cc29cfa2afeb-init/diff:/var/lib/docker/overlay2/2ab30f867418f233812f5ff754587aaeab7569a5579dc6a5c99873a35cf81eb6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/591d532192eba7f9513e2e7a3f154ba6c3bec034fce6fcc25e10cc29cfa2afeb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/591d532192eba7f9513e2e7a3f154ba6c3bec034fce6fcc25e10cc29cfa2afeb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/591d532192eba7f9513e2e7a3f154ba6c3bec034fce6fcc25e10cc29cfa2afeb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-362964",
	                "Source": "/var/lib/docker/volumes/newest-cni-362964/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-362964",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-362964",
	                "name.minikube.sigs.k8s.io": "newest-cni-362964",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cb0f155dc1b5d3a688e5599a95e8671795650988558c901efb92d1fa0ade0db2",
	            "SandboxKey": "/var/run/docker/netns/cb0f155dc1b5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33520"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33521"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33524"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33522"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-362964": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a5cddf8c31ff9d3f9d9f694626ad7e5d879d33f2650fe55e248b8c0b8c028028",
	                    "EndpointID": "82a8372c79df8eefcb4fb4110a08b13a7be607c464d25deaf5e1c2f64a34e91a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ae:a4:98:55:d5:2e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-362964",
	                        "a8feb9db9236"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
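The inspect dump above is the paused profile's kic container as Docker sees it: privileged, capped at 3 GiB of memory, /var backed by the newest-cni-362964 volume, and SSH (22/tcp) and the API server port (8443/tcp) published on loopback as 33520 and 33523. A minimal Go sketch (an assumed example, not the harness's own helper) that pulls those fields back out of the same docker container inspect JSON:

    // portmap.go - sketch that re-reads the published host ports and the container
    // IP from `docker container inspect`; the container name comes from the log above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type inspectEntry struct {
        NetworkSettings struct {
            Ports map[string][]struct {
                HostIp   string
                HostPort string
            }
            Networks map[string]struct {
                IPAddress string
            }
        }
    }

    func main() {
        out, err := exec.Command("docker", "container", "inspect", "newest-cni-362964").Output()
        if err != nil {
            panic(err)
        }
        var entries []inspectEntry
        if err := json.Unmarshal(out, &entries); err != nil {
            panic(err)
        }
        for _, e := range entries {
            for port, bindings := range e.NetworkSettings.Ports {
                for _, b := range bindings {
                    fmt.Printf("%-10s -> %s:%s\n", port, b.HostIp, b.HostPort)
                }
            }
            for name, nw := range e.NetworkSettings.Networks {
                fmt.Printf("network %s: %s\n", name, nw.IPAddress)
            }
        }
    }

Against the dump above this would print, among others, "8443/tcp -> 127.0.0.1:33523" and "network newest-cni-362964: 192.168.76.2".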
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-362964 -n newest-cni-362964
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-362964 -n newest-cni-362964: exit status 2 (333.720983ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
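The "(may be ok)" note reflects how minikube status behaves on a paused cluster: the Host field still prints Running, but the command exits non-zero because components beyond the host container are stopped. A caller that only cares about the host state can branch on the exit code instead of failing outright; a sketch of that pattern (an assumed example, not the actual logic in helpers_test.go):

    // statusexit.go - sketch that treats a non-zero exit from
    // `minikube status --format={{.Host}}` as informational rather than fatal.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "status",
            "--format={{.Host}}", "-p", "newest-cni-362964")
        out, err := cmd.Output()
        host := strings.TrimSpace(string(out))

        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // A paused cluster reports Host=Running yet exits non-zero
            // (exit status 2 in the run above) because other components are down.
            fmt.Printf("host=%q exit=%d (may be ok)\n", host, exitErr.ExitCode())
            return
        }
        if err != nil {
            panic(err) // binary missing or not runnable, not an exit-status problem
        }
        fmt.Printf("host=%q exit=0\n", host)
    }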
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-362964 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-038239 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-038239 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ image   │ old-k8s-version-417583 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p old-k8s-version-417583 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-038239 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p default-k8s-diff-port-038239 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:47 UTC │
	│ delete  │ -p old-k8s-version-417583                                                                                                                                                                                                                            │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ delete  │ -p old-k8s-version-417583                                                                                                                                                                                                                            │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p newest-cni-362964 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:47 UTC │
	│ image   │ no-preload-992258 image list --format=json                                                                                                                                                                                                           │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p no-preload-992258 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ delete  │ -p no-preload-992258                                                                                                                                                                                                                                 │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ image   │ embed-certs-973953 image list --format=json                                                                                                                                                                                                          │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p embed-certs-973953 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ delete  │ -p no-preload-992258                                                                                                                                                                                                                                 │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ delete  │ -p embed-certs-973953                                                                                                                                                                                                                                │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ addons  │ enable metrics-server -p newest-cni-362964 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │                     │
	│ delete  │ -p embed-certs-973953                                                                                                                                                                                                                                │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ stop    │ -p newest-cni-362964 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ addons  │ enable dashboard -p newest-cni-362964 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ start   │ -p newest-cni-362964 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ image   │ newest-cni-362964 image list --format=json                                                                                                                                                                                                           │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ pause   │ -p newest-cni-362964 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │                     │
	│ image   │ default-k8s-diff-port-038239 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ pause   │ -p default-k8s-diff-port-038239 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
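The Audit table above is minikube's command history for this job, one row per minikube invocation with its profile, user, version and timestamps, as rendered by minikube logs. The same records are kept on disk under the MINIKUBE_HOME shown earlier in the report; a sketch that prints the tail of that file, assuming the conventional logs/audit.json location inside the .minikube directory (adjust the path for other environments):

    // audittail.go - sketch printing the last raw entries of minikube's audit log.
    // The path is an assumption built from the MINIKUBE_HOME in the log above.
    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    func main() {
        path := "/home/jenkins/minikube-integration/22122-390571/.minikube/logs/audit.json"
        f, err := os.Open(path)
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var lines []string
        sc := bufio.NewScanner(f)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // audit entries can be long
        for sc.Scan() {
            lines = append(lines, sc.Text())
        }
        if err := sc.Err(); err != nil {
            panic(err)
        }
        start := len(lines) - 10
        if start < 0 {
            start = 0
        }
        for _, l := range lines[start:] {
            fmt.Println(l) // print raw entries without parsing them
        }
    }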
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:47:15
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:47:15.396688  743793 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:47:15.396994  743793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:47:15.397008  743793 out.go:374] Setting ErrFile to fd 2...
	I1213 13:47:15.397013  743793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:47:15.397213  743793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:47:15.397667  743793 out.go:368] Setting JSON to false
	I1213 13:47:15.398880  743793 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8983,"bootTime":1765624652,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:47:15.398951  743793 start.go:143] virtualization: kvm guest
	I1213 13:47:15.401034  743793 out.go:179] * [newest-cni-362964] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:47:15.402467  743793 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:47:15.402473  743793 notify.go:221] Checking for updates...
	I1213 13:47:15.404652  743793 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:47:15.406062  743793 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:47:15.407288  743793 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:47:15.408413  743793 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:47:15.409475  743793 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:47:15.411121  743793 config.go:182] Loaded profile config "newest-cni-362964": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:47:15.411682  743793 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:47:15.438362  743793 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:47:15.438448  743793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:47:15.493074  743793 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-13 13:47:15.482438278 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:47:15.493183  743793 docker.go:319] overlay module found
	I1213 13:47:15.494725  743793 out.go:179] * Using the docker driver based on existing profile
	I1213 13:47:15.495689  743793 start.go:309] selected driver: docker
	I1213 13:47:15.495700  743793 start.go:927] validating driver "docker" against &{Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:47:15.495792  743793 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:47:15.496338  743793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:47:15.549301  743793 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-13 13:47:15.539654496 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:47:15.549628  743793 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 13:47:15.549663  743793 cni.go:84] Creating CNI manager for ""
	I1213 13:47:15.549714  743793 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:47:15.549744  743793 start.go:353] cluster config:
	{Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:47:15.551716  743793 out.go:179] * Starting "newest-cni-362964" primary control-plane node in "newest-cni-362964" cluster
	I1213 13:47:15.552847  743793 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 13:47:15.553932  743793 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 13:47:15.555067  743793 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:47:15.555097  743793 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1213 13:47:15.555109  743793 cache.go:65] Caching tarball of preloaded images
	I1213 13:47:15.555160  743793 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 13:47:15.555218  743793 preload.go:238] Found /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 13:47:15.555231  743793 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 13:47:15.555336  743793 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/config.json ...
	I1213 13:47:15.575398  743793 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 13:47:15.575441  743793 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 13:47:15.575458  743793 cache.go:243] Successfully downloaded all kic artifacts
	I1213 13:47:15.575494  743793 start.go:360] acquireMachinesLock for newest-cni-362964: {Name:mk61572d281c54a6e0670409b0733cc12a8d00e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:47:15.575570  743793 start.go:364] duration metric: took 43.268µs to acquireMachinesLock for "newest-cni-362964"
	I1213 13:47:15.575593  743793 start.go:96] Skipping create...Using existing machine configuration
	I1213 13:47:15.575602  743793 fix.go:54] fixHost starting: 
	I1213 13:47:15.575821  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:15.591746  743793 fix.go:112] recreateIfNeeded on newest-cni-362964: state=Stopped err=<nil>
	W1213 13:47:15.591771  743793 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 13:47:15.593293  743793 out.go:252] * Restarting existing docker container for "newest-cni-362964" ...
	I1213 13:47:15.593360  743793 cli_runner.go:164] Run: docker start newest-cni-362964
	I1213 13:47:15.823041  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:15.840403  743793 kic.go:430] container "newest-cni-362964" state is running.
	I1213 13:47:15.840799  743793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:47:15.859084  743793 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/config.json ...
	I1213 13:47:15.859284  743793 machine.go:94] provisionDockerMachine start ...
	I1213 13:47:15.859348  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:15.877682  743793 main.go:143] libmachine: Using SSH client type: native
	I1213 13:47:15.878002  743793 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1213 13:47:15.878021  743793 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:47:15.878655  743793 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47684->127.0.0.1:33520: read: connection reset by peer
	I1213 13:47:19.010243  743793 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-362964
	
	I1213 13:47:19.010275  743793 ubuntu.go:182] provisioning hostname "newest-cni-362964"
	I1213 13:47:19.010342  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:19.028649  743793 main.go:143] libmachine: Using SSH client type: native
	I1213 13:47:19.028926  743793 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1213 13:47:19.028944  743793 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-362964 && echo "newest-cni-362964" | sudo tee /etc/hostname
	I1213 13:47:19.167849  743793 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-362964
	
	I1213 13:47:19.167956  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:19.185955  743793 main.go:143] libmachine: Using SSH client type: native
	I1213 13:47:19.186189  743793 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1213 13:47:19.186207  743793 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-362964' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-362964/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-362964' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:47:19.317047  743793 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 13:47:19.317087  743793 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-390571/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-390571/.minikube}
	I1213 13:47:19.317117  743793 ubuntu.go:190] setting up certificates
	I1213 13:47:19.317129  743793 provision.go:84] configureAuth start
	I1213 13:47:19.317214  743793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:47:19.335703  743793 provision.go:143] copyHostCerts
	I1213 13:47:19.335786  743793 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem, removing ...
	I1213 13:47:19.335809  743793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem
	I1213 13:47:19.335895  743793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem (1078 bytes)
	I1213 13:47:19.336007  743793 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem, removing ...
	I1213 13:47:19.336019  743793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem
	I1213 13:47:19.336046  743793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem (1123 bytes)
	I1213 13:47:19.336102  743793 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem, removing ...
	I1213 13:47:19.336109  743793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem
	I1213 13:47:19.336133  743793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem (1679 bytes)
	I1213 13:47:19.336181  743793 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem org=jenkins.newest-cni-362964 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-362964]
	I1213 13:47:19.394450  743793 provision.go:177] copyRemoteCerts
	I1213 13:47:19.394502  743793 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:47:19.394534  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:19.411750  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:19.507173  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:47:19.523766  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 13:47:19.539629  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 13:47:19.555589  743793 provision.go:87] duration metric: took 238.436679ms to configureAuth
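The configureAuth step above re-issues the docker-machine style server certificate for the restarted container: it is signed with the profile's CA (ca.pem/ca-key.pem) and carries the SANs listed in the log (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-362964); the resulting ca.pem, server.pem and server-key.pem are then copied into /etc/docker. A compact sketch of issuing a certificate with the same SANs (self-signed here for brevity, unlike minikube's CA-signed one):

    // servercert.go - sketch of creating a server certificate with the SANs from
    // the log above. Self-signed for brevity; minikube signs with its CA key.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-362964"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
            DNSNames:     []string{"localhost", "minikube", "newest-cni-362964"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }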
	I1213 13:47:19.555611  743793 ubuntu.go:206] setting minikube options for container-runtime
	I1213 13:47:19.555811  743793 config.go:182] Loaded profile config "newest-cni-362964": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:47:19.555940  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:19.574297  743793 main.go:143] libmachine: Using SSH client type: native
	I1213 13:47:19.574507  743793 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1213 13:47:19.574528  743793 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:47:19.858331  743793 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:47:19.858358  743793 machine.go:97] duration metric: took 3.999058826s to provisionDockerMachine
	I1213 13:47:19.858370  743793 start.go:293] postStartSetup for "newest-cni-362964" (driver="docker")
	I1213 13:47:19.858383  743793 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:47:19.858433  743793 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:47:19.858474  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:19.876049  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:19.971385  743793 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:47:19.974791  743793 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 13:47:19.974823  743793 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 13:47:19.974837  743793 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/addons for local assets ...
	I1213 13:47:19.974893  743793 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/files for local assets ...
	I1213 13:47:19.974988  743793 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem -> 3941302.pem in /etc/ssl/certs
	I1213 13:47:19.975100  743793 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 13:47:19.983356  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:47:20.000196  743793 start.go:296] duration metric: took 141.812887ms for postStartSetup
	I1213 13:47:20.000258  743793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:47:20.000314  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:20.017528  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:20.108567  743793 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 13:47:20.112996  743793 fix.go:56] duration metric: took 4.537388049s for fixHost
	I1213 13:47:20.113024  743793 start.go:83] releasing machines lock for "newest-cni-362964", held for 4.537441376s
	I1213 13:47:20.113088  743793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:47:20.130726  743793 ssh_runner.go:195] Run: cat /version.json
	I1213 13:47:20.130793  743793 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:47:20.130802  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:20.130878  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:20.148822  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:20.150135  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:20.295420  743793 ssh_runner.go:195] Run: systemctl --version
	I1213 13:47:20.301859  743793 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:47:20.336602  743793 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 13:47:20.341211  743793 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:47:20.341287  743793 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:47:20.349054  743793 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 13:47:20.349073  743793 start.go:496] detecting cgroup driver to use...
	I1213 13:47:20.349107  743793 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 13:47:20.349163  743793 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:47:20.362399  743793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:47:20.374254  743793 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:47:20.374306  743793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:47:20.388234  743793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:47:20.399676  743793 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:47:20.474562  743793 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:47:20.554145  743793 docker.go:234] disabling docker service ...
	I1213 13:47:20.554226  743793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:47:20.568676  743793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:47:20.580110  743793 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:47:20.658535  743793 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:47:20.737176  743793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:47:20.748856  743793 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:47:20.762068  743793 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 13:47:20.762129  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.770675  743793 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 13:47:20.770733  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.779036  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.786966  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.795024  743793 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:47:20.802620  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.810583  743793 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.818388  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.826379  743793 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:47:20.833195  743793 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:47:20.839891  743793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:47:20.916966  743793 ssh_runner.go:195] Run: sudo systemctl restart crio
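The block above is plain text surgery on /etc/crio/crio.conf.d/02-crio.conf: pin the pause image to registry.k8s.io/pause:3.10.1, force the systemd cgroup manager, move conmon into the pod cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls, followed by a daemon-reload and a CRI-O restart. A sketch of the two central rewrites done in-process instead of through sed (illustrative only; the authoritative changes are the sed commands shown above):

    // crioconf.go - sketch applying the two main rewrites from the log (pause_image
    // and cgroup_manager) to a CRI-O drop-in, in Go instead of sed.
    package main

    import (
        "fmt"
        "regexp"
    )

    func rewrite(conf string) string {
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
        conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "systemd"`)
        return conf
    }

    func main() {
        // Stand-in content for /etc/crio/crio.conf.d/02-crio.conf.
        in := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "system.slice"
    `
        fmt.Print(rewrite(in))
    }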
	I1213 13:47:21.051005  743793 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:47:21.051061  743793 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:47:21.054826  743793 start.go:564] Will wait 60s for crictl version
	I1213 13:47:21.054891  743793 ssh_runner.go:195] Run: which crictl
	I1213 13:47:21.058302  743793 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 13:47:21.081279  743793 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 13:47:21.081361  743793 ssh_runner.go:195] Run: crio --version
	I1213 13:47:21.110234  743793 ssh_runner.go:195] Run: crio --version
	I1213 13:47:21.139180  743793 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 13:47:21.140352  743793 cli_runner.go:164] Run: docker network inspect newest-cni-362964 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:47:21.158817  743793 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 13:47:21.162980  743793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:47:21.174525  743793 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 13:47:21.175553  743793 kubeadm.go:884] updating cluster {Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:47:21.175710  743793 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:47:21.175761  743793 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:47:21.210092  743793 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:47:21.210114  743793 crio.go:433] Images already preloaded, skipping extraction
	I1213 13:47:21.210160  743793 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:47:21.234690  743793 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:47:21.234711  743793 cache_images.go:86] Images are preloaded, skipping loading
	I1213 13:47:21.234719  743793 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 13:47:21.234845  743793 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-362964 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
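The unit fragment above is what later lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 374-byte file scp'd further down in this log): the empty ExecStart= clears any packaged command and the second ExecStart= substitutes minikube's kubelet invocation pinned to this node's name and IP. A toy rendering of a drop-in with the same shape using text/template (the template and its fields are illustrative, not minikube's own):

    // kubeletdropin.go - toy rendering of a kubelet systemd drop-in shaped like
    // the one above; template and field names are illustrative.
    package main

    import (
        "os"
        "text/template"
    )

    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(dropIn))
        err := t.Execute(os.Stdout, map[string]string{
            "Version": "v1.35.0-beta.0",
            "Node":    "newest-cni-362964",
            "IP":      "192.168.76.2",
        })
        if err != nil {
            panic(err)
        }
    }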
	I1213 13:47:21.234912  743793 ssh_runner.go:195] Run: crio config
	I1213 13:47:21.282460  743793 cni.go:84] Creating CNI manager for ""
	I1213 13:47:21.282487  743793 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:47:21.282509  743793 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 13:47:21.282539  743793 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-362964 NodeName:newest-cni-362964 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:47:21.282708  743793 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-362964"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
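Note: the YAML above is the kubeadm/kubelet/kube-proxy configuration minikube generates before it is copied to the node (the scp to /var/tmp/minikube/kubeadm.yaml.new appears just below). A minimal way to inspect the rendered file on the node itself, assuming the newest-cni-362964 profile is still running, is:

    # show the config file minikube staged on the node (path taken from the log below)
    $ minikube -p newest-cni-362964 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
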
	I1213 13:47:21.282807  743793 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 13:47:21.290750  743793 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:47:21.290846  743793 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:47:21.298168  743793 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 13:47:21.310228  743793 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 13:47:21.322581  743793 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1213 13:47:21.334293  743793 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 13:47:21.337735  743793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:47:21.347200  743793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:47:21.425075  743793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:47:21.445740  743793 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964 for IP: 192.168.76.2
	I1213 13:47:21.445766  743793 certs.go:195] generating shared ca certs ...
	I1213 13:47:21.445805  743793 certs.go:227] acquiring lock for ca certs: {Name:mkb6963f3134ffd486c672ddb3a967e56122d5d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:47:21.445974  743793 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key
	I1213 13:47:21.446031  743793 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key
	I1213 13:47:21.446043  743793 certs.go:257] generating profile certs ...
	I1213 13:47:21.446154  743793 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.key
	I1213 13:47:21.446224  743793 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb
	I1213 13:47:21.446272  743793 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key
	I1213 13:47:21.446406  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem (1338 bytes)
	W1213 13:47:21.446452  743793 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130_empty.pem, impossibly tiny 0 bytes
	I1213 13:47:21.446466  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:47:21.446502  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:47:21.446547  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:47:21.446593  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem (1679 bytes)
	I1213 13:47:21.446654  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:47:21.447541  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:47:21.465298  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 13:47:21.483440  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:47:21.502103  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 13:47:21.522629  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 13:47:21.542814  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 13:47:21.559135  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:47:21.575971  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 13:47:21.591916  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem --> /usr/share/ca-certificates/394130.pem (1338 bytes)
	I1213 13:47:21.608394  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /usr/share/ca-certificates/3941302.pem (1708 bytes)
	I1213 13:47:21.624618  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:47:21.642224  743793 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:47:21.654261  743793 ssh_runner.go:195] Run: openssl version
	I1213 13:47:21.660050  743793 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/394130.pem
	I1213 13:47:21.668116  743793 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/394130.pem /etc/ssl/certs/394130.pem
	I1213 13:47:21.675369  743793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/394130.pem
	I1213 13:47:21.679216  743793 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/394130.pem
	I1213 13:47:21.679263  743793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/394130.pem
	I1213 13:47:21.712864  743793 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 13:47:21.720130  743793 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3941302.pem
	I1213 13:47:21.727050  743793 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3941302.pem /etc/ssl/certs/3941302.pem
	I1213 13:47:21.733917  743793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3941302.pem
	I1213 13:47:21.737465  743793 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/3941302.pem
	I1213 13:47:21.737512  743793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3941302.pem
	I1213 13:47:21.771088  743793 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 13:47:21.778112  743793 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:47:21.784916  743793 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:47:21.791680  743793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:47:21.794961  743793 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:47:21.795003  743793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:47:21.829333  743793 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 13:47:21.836601  743793 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:47:21.840092  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 13:47:21.873865  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 13:47:21.907577  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 13:47:21.942677  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 13:47:21.990730  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 13:47:22.038527  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 13:47:22.089275  743793 kubeadm.go:401] StartCluster: {Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:47:22.089396  743793 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:47:22.089456  743793 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:47:22.124825  743793 cri.go:89] found id: "5da4caf87e21e2c24c49feccc728cddb718f645b6ea4db87e0bf78cf3c81e434"
	I1213 13:47:22.124858  743793 cri.go:89] found id: "bf5900d175c7d2696a8f8d812ce80bb83b78d5e729b180a06ffe24fd4380248b"
	I1213 13:47:22.124862  743793 cri.go:89] found id: "110b53112a1a28576070f2a2242056e28359eefcce484ce9f1badc19b9aa9fe0"
	I1213 13:47:22.124866  743793 cri.go:89] found id: "467a2e1a14516b194138faf28743f2e31cc6c2c67e3a2b45354fa6c0ff15d609"
	I1213 13:47:22.124869  743793 cri.go:89] found id: ""
	I1213 13:47:22.124908  743793 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 13:47:22.137341  743793 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:47:22Z" level=error msg="open /run/runc: no such file or directory"
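Note: the "unpause failed: list paused: runc ... open /run/runc: no such file or directory" warning above is the same failure pattern seen in the pause/unpause test cases in this report. The log does not say why /run/runc is missing; one plausible (unconfirmed) explanation is that CRI-O is using a different OCI runtime or a non-default runc state directory, so the default runc root is never created. A rough way to check this on the node, assuming the profile is still up:

    # reproduce the failing call from the log
    $ minikube -p newest-cni-362964 ssh -- sudo runc list -f json
    # see which runtime state directories actually exist
    $ minikube -p newest-cni-362964 ssh -- sudo ls -d /run/runc /run/crun
    # dump the effective CRI-O config (same command the log runs above) to see the configured runtime
    $ minikube -p newest-cni-362964 ssh -- sudo crio config
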
	I1213 13:47:22.137416  743793 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:47:22.145362  743793 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 13:47:22.145377  743793 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 13:47:22.145421  743793 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 13:47:22.152664  743793 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 13:47:22.153211  743793 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-362964" does not appear in /home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:47:22.153502  743793 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-390571/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-362964" cluster setting kubeconfig missing "newest-cni-362964" context setting]
	I1213 13:47:22.154092  743793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/kubeconfig: {Name:mke96882ff9199e558f67b9408c8f04265bde7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:47:22.155563  743793 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 13:47:22.163308  743793 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1213 13:47:22.163339  743793 kubeadm.go:602] duration metric: took 17.955654ms to restartPrimaryControlPlane
	I1213 13:47:22.163350  743793 kubeadm.go:403] duration metric: took 74.090212ms to StartCluster
	I1213 13:47:22.163370  743793 settings.go:142] acquiring lock: {Name:mkb44193ba58b09d8615650747eaad19c43e1a80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:47:22.163433  743793 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:47:22.164305  743793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/kubeconfig: {Name:mke96882ff9199e558f67b9408c8f04265bde7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:47:22.164552  743793 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:47:22.164629  743793 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 13:47:22.164752  743793 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-362964"
	I1213 13:47:22.164768  743793 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-362964"
	W1213 13:47:22.164802  743793 addons.go:248] addon storage-provisioner should already be in state true
	I1213 13:47:22.164794  743793 addons.go:70] Setting dashboard=true in profile "newest-cni-362964"
	I1213 13:47:22.164827  743793 config.go:182] Loaded profile config "newest-cni-362964": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:47:22.164851  743793 host.go:66] Checking if "newest-cni-362964" exists ...
	I1213 13:47:22.164849  743793 addons.go:70] Setting default-storageclass=true in profile "newest-cni-362964"
	I1213 13:47:22.164885  743793 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-362964"
	I1213 13:47:22.164831  743793 addons.go:239] Setting addon dashboard=true in "newest-cni-362964"
	W1213 13:47:22.164998  743793 addons.go:248] addon dashboard should already be in state true
	I1213 13:47:22.165031  743793 host.go:66] Checking if "newest-cni-362964" exists ...
	I1213 13:47:22.165174  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:22.165349  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:22.165507  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:22.166701  743793 out.go:179] * Verifying Kubernetes components...
	I1213 13:47:22.167981  743793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:47:22.192293  743793 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 13:47:22.192607  743793 addons.go:239] Setting addon default-storageclass=true in "newest-cni-362964"
	W1213 13:47:22.192634  743793 addons.go:248] addon default-storageclass should already be in state true
	I1213 13:47:22.192668  743793 host.go:66] Checking if "newest-cni-362964" exists ...
	I1213 13:47:22.193205  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:22.193489  743793 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 13:47:22.194545  743793 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 13:47:22.194630  743793 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:47:22.194650  743793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 13:47:22.194717  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:22.197270  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 13:47:22.197289  743793 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 13:47:22.197336  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:22.223581  743793 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 13:47:22.223609  743793 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 13:47:22.223725  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:22.235014  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:22.236249  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:22.249859  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:22.322026  743793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:47:22.335417  743793 api_server.go:52] waiting for apiserver process to appear ...
	I1213 13:47:22.335494  743793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:47:22.347746  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 13:47:22.347860  743793 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 13:47:22.348903  743793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:47:22.348995  743793 api_server.go:72] duration metric: took 184.405092ms to wait for apiserver process to appear ...
	I1213 13:47:22.349020  743793 api_server.go:88] waiting for apiserver healthz status ...
	I1213 13:47:22.349038  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:22.357705  743793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 13:47:22.364407  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 13:47:22.364428  743793 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 13:47:22.378144  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 13:47:22.378163  743793 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 13:47:22.392593  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 13:47:22.392619  743793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 13:47:22.406406  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 13:47:22.406430  743793 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 13:47:22.420459  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 13:47:22.420499  743793 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 13:47:22.432910  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 13:47:22.432934  743793 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 13:47:22.444815  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 13:47:22.444841  743793 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 13:47:22.458380  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 13:47:22.458404  743793 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 13:47:22.471198  743793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 13:47:23.763047  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 13:47:23.763087  743793 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 13:47:23.763104  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:23.768125  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 13:47:23.768149  743793 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 13:47:23.849412  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:23.855339  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:47:23.855368  743793 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:47:24.312519  743793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.963582954s)
	I1213 13:47:24.312612  743793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.954874796s)
	I1213 13:47:24.312784  743793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.841538299s)
	I1213 13:47:24.314353  743793 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-362964 addons enable metrics-server
	
	I1213 13:47:24.323239  743793 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1213 13:47:24.324346  743793 addons.go:530] duration metric: took 2.159730307s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1213 13:47:24.349242  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:24.353865  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:47:24.353887  743793 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:47:24.849405  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:24.854959  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:47:24.854986  743793 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:47:25.349483  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:25.353727  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1213 13:47:25.354695  743793 api_server.go:141] control plane version: v1.35.0-beta.0
	I1213 13:47:25.354720  743793 api_server.go:131] duration metric: took 3.00569336s to wait for apiserver health ...
	I1213 13:47:25.354729  743793 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 13:47:25.358250  743793 system_pods.go:59] 8 kube-system pods found
	I1213 13:47:25.358279  743793 system_pods.go:61] "coredns-7d764666f9-rqktl" [7c70d7d0-5139-4893-905c-0e183495035e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 13:47:25.358287  743793 system_pods.go:61] "etcd-newest-cni-362964" [49d03570-d59e-4e95-902f-1994733e6009] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 13:47:25.358295  743793 system_pods.go:61] "kindnet-qk8dn" [0df822e7-da1c-43ee-9a1e-b2131ae84e50] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1213 13:47:25.358303  743793 system_pods.go:61] "kube-apiserver-newest-cni-362964" [31c7799d-0188-4e2f-8d32-eb6e3ffe29ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 13:47:25.358315  743793 system_pods.go:61] "kube-controller-manager-newest-cni-362964" [cee82184-0e71-4dfb-8851-d642f2716578] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 13:47:25.358325  743793 system_pods.go:61] "kube-proxy-97cpx" [c081628a-7cdd-4b8c-9d28-9d95707c6064] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 13:47:25.358338  743793 system_pods.go:61] "kube-scheduler-newest-cni-362964" [d160f41f-e904-4d11-9b2c-157bfcbc668f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 13:47:25.358346  743793 system_pods.go:61] "storage-provisioner" [b6d4689e-b3f1-496d-bfd4-11cb93ea7c15] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 13:47:25.358352  743793 system_pods.go:74] duration metric: took 3.617333ms to wait for pod list to return data ...
	I1213 13:47:25.358359  743793 default_sa.go:34] waiting for default service account to be created ...
	I1213 13:47:25.360577  743793 default_sa.go:45] found service account: "default"
	I1213 13:47:25.360597  743793 default_sa.go:55] duration metric: took 2.231432ms for default service account to be created ...
	I1213 13:47:25.360614  743793 kubeadm.go:587] duration metric: took 3.196023464s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 13:47:25.360633  743793 node_conditions.go:102] verifying NodePressure condition ...
	I1213 13:47:25.362709  743793 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 13:47:25.362732  743793 node_conditions.go:123] node cpu capacity is 8
	I1213 13:47:25.362750  743793 node_conditions.go:105] duration metric: took 2.111782ms to run NodePressure ...
	I1213 13:47:25.362764  743793 start.go:242] waiting for startup goroutines ...
	I1213 13:47:25.362789  743793 start.go:247] waiting for cluster config update ...
	I1213 13:47:25.362806  743793 start.go:256] writing updated cluster config ...
	I1213 13:47:25.363125  743793 ssh_runner.go:195] Run: rm -f paused
	I1213 13:47:25.410985  743793 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1213 13:47:25.412352  743793 out.go:179] * Done! kubectl is now configured to use "newest-cni-362964" cluster and "default" namespace by default
	
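Note: the healthz loop above is minikube polling the apiserver until it returns 200: anonymous requests are first rejected with 403, then the endpoint reports 500 while post-start hooks (rbac/bootstrap-roles, system priority classes) are still completing, and finally returns "ok". A manual probe with the same verbose check output, assuming the kubectl context name matches the profile as the "Done!" line above indicates, would be:

    # authenticated healthz probe; ?verbose lists the individual checks
    $ kubectl --context newest-cni-362964 get --raw '/healthz?verbose'
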
	
	==> CRI-O <==
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.81802484Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.821501957Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=bb6c33f3-b811-48b2-a930-72913645a535 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.822288055Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e64dc6f9-7d38-412b-b763-a27304bbc37c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.823230713Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.823884086Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.824148801Z" level=info msg="Ran pod sandbox 7873dae1ce5b0f115a6d299d9641ad4beb45e4f13fea584a79be13a1ca02fefa with infra container: kube-system/kindnet-qk8dn/POD" id=bb6c33f3-b811-48b2-a930-72913645a535 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.824759202Z" level=info msg="Ran pod sandbox ecd6a972053346198564a51143eef815dd0685d7d21fea2e9e2c29b0488f4146 with infra container: kube-system/kube-proxy-97cpx/POD" id=e64dc6f9-7d38-412b-b763-a27304bbc37c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.825475942Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2c3b201f-f517-4988-9cf3-e8beebae70db name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.825822358Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=e27df904-ff39-4dd1-97ba-d0bc7e9f5546 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.8264906Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=77bca4af-4d48-4d90-a09f-b765f36e6402 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.827000178Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=bf1f008a-74fa-476a-a9d5-661b69d5976c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.827958794Z" level=info msg="Creating container: kube-system/kube-proxy-97cpx/kube-proxy" id=25569584-5ccf-4678-80a1-a0b1c1bf7851 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.827984324Z" level=info msg="Creating container: kube-system/kindnet-qk8dn/kindnet-cni" id=8286d548-bccf-46c9-8bc9-3ae0e57199bc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.828045975Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.828063389Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.833871107Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.834567397Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.835557193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.836028032Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.866434361Z" level=info msg="Created container 2a7d6843350bc29728cd2ac4b2bc9e2e65103e590f5de16c0ae3fbedc865ba87: kube-system/kindnet-qk8dn/kindnet-cni" id=8286d548-bccf-46c9-8bc9-3ae0e57199bc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.86708497Z" level=info msg="Starting container: 2a7d6843350bc29728cd2ac4b2bc9e2e65103e590f5de16c0ae3fbedc865ba87" id=949af17e-8728-4924-b500-5a0b7d23d95d name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.868826905Z" level=info msg="Created container af96593a4aff28be333ae03c866d3b489ad8ed84e208d79e37b6f05de74ac937: kube-system/kube-proxy-97cpx/kube-proxy" id=25569584-5ccf-4678-80a1-a0b1c1bf7851 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.8688955Z" level=info msg="Started container" PID=1063 containerID=2a7d6843350bc29728cd2ac4b2bc9e2e65103e590f5de16c0ae3fbedc865ba87 description=kube-system/kindnet-qk8dn/kindnet-cni id=949af17e-8728-4924-b500-5a0b7d23d95d name=/runtime.v1.RuntimeService/StartContainer sandboxID=7873dae1ce5b0f115a6d299d9641ad4beb45e4f13fea584a79be13a1ca02fefa
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.87122653Z" level=info msg="Starting container: af96593a4aff28be333ae03c866d3b489ad8ed84e208d79e37b6f05de74ac937" id=05d04bbd-4018-421b-b5c8-3710e473e204 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:47:24 newest-cni-362964 crio[526]: time="2025-12-13T13:47:24.874991347Z" level=info msg="Started container" PID=1062 containerID=af96593a4aff28be333ae03c866d3b489ad8ed84e208d79e37b6f05de74ac937 description=kube-system/kube-proxy-97cpx/kube-proxy id=05d04bbd-4018-421b-b5c8-3710e473e204 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ecd6a972053346198564a51143eef815dd0685d7d21fea2e9e2c29b0488f4146
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	2a7d6843350bc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   7873dae1ce5b0       kindnet-qk8dn                               kube-system
	af96593a4aff2       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   6 seconds ago       Running             kube-proxy                1                   ecd6a97205334       kube-proxy-97cpx                            kube-system
	5da4caf87e21e       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   9 seconds ago       Running             kube-controller-manager   1                   46e4eb8e7c6f2       kube-controller-manager-newest-cni-362964   kube-system
	bf5900d175c7d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   9 seconds ago       Running             etcd                      1                   311d1ac61118e       etcd-newest-cni-362964                      kube-system
	110b53112a1a2       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   9 seconds ago       Running             kube-scheduler            1                   78b27c52b8f68       kube-scheduler-newest-cni-362964            kube-system
	467a2e1a14516       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   9 seconds ago       Running             kube-apiserver            1                   62bfcab2f7045       kube-apiserver-newest-cni-362964            kube-system
	
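Note: the container status table above is the CRI view of the node; ATTEMPT 1 on every container indicates each was restarted once after the cluster restart. A sketch for reproducing it on the node (crictl is the same tool the log invokes via ssh_runner):

    # list all CRI containers, including exited ones
    $ minikube -p newest-cni-362964 ssh -- sudo crictl ps -a
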
	
	==> describe nodes <==
	Name:               newest-cni-362964
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-362964
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=newest-cni-362964
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_46_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:46:56 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-362964
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:47:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:47:23 +0000   Sat, 13 Dec 2025 13:46:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:47:23 +0000   Sat, 13 Dec 2025 13:46:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:47:23 +0000   Sat, 13 Dec 2025 13:46:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 13 Dec 2025 13:47:23 +0000   Sat, 13 Dec 2025 13:46:54 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-362964
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                7984a389-2b8f-4f40-bc98-e167ef24613c
	  Boot ID:                    3a031c38-2de5-4abf-9191-ca3cf8c37af1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-362964                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-qk8dn                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-362964             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-362964    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-97cpx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-362964             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  28s   node-controller  Node newest-cni-362964 event: Registered Node newest-cni-362964 in Controller
	  Normal  RegisteredNode  5s    node-controller  Node newest-cni-362964 event: Registered Node newest-cni-362964 in Controller
	
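Note: the node description above shows Ready=False (NetworkPluginNotReady, no CNI config in /etc/cni/net.d/) and the node.kubernetes.io/not-ready:NoSchedule taint, which is consistent with coredns and storage-provisioner being reported as Pending/Unschedulable earlier in this log. A hedged sketch for checking the same state directly, assuming the kubectl context matches the profile name:

    # inspect the taint and readiness condition
    $ kubectl --context newest-cni-362964 describe node newest-cni-362964
    # confirm which kube-system pods are still pending and why
    $ kubectl --context newest-cni-362964 get pods -n kube-system -o wide
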
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 d4 5a 35 c7 c3 08 06
	[  +0.021086] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +19.681588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 0c 97 18 9b e3 08 06
	[  +0.000314] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 04 61 d2 c8 ed 08 06
	[Dec13 13:44] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[  +7.252347] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 ce fd 58 59 0f 08 06
	[  +0.000117] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	[  +1.567410] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 59 b8 80 29 4a 08 06
	[  +0.000370] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +13.814205] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 cb 6b 87 5d af 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[Dec13 13:45] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8e 49 cc d7 b3 9c 08 06
	[  +0.000851] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	
	
	==> etcd [bf5900d175c7d2696a8f8d812ce80bb83b78d5e729b180a06ffe24fd4380248b] <==
	{"level":"warn","ts":"2025-12-13T13:47:23.161413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.167622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.173870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.182553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.188737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.195729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.208946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.215275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.221427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.227745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.234085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.253062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.265268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.271544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.277725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.284912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.291362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.297633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.304078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.310676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.330522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.336963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.343053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.349595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:47:23.397236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51124","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:47:31 up  2:29,  0 user,  load average: 4.14, 4.11, 2.73
	Linux newest-cni-362964 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2a7d6843350bc29728cd2ac4b2bc9e2e65103e590f5de16c0ae3fbedc865ba87] <==
	I1213 13:47:25.049159       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 13:47:25.049400       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1213 13:47:25.049535       1 main.go:148] setting mtu 1500 for CNI 
	I1213 13:47:25.049552       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 13:47:25.049595       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T13:47:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 13:47:25.248770       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 13:47:25.248850       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 13:47:25.248865       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 13:47:25.249015       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [467a2e1a14516b194138faf28743f2e31cc6c2c67e3a2b45354fa6c0ff15d609] <==
	I1213 13:47:23.847395       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1213 13:47:23.847658       1 aggregator.go:187] initial CRD sync complete...
	I1213 13:47:23.847668       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 13:47:23.847674       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 13:47:23.847681       1 cache.go:39] Caches are synced for autoregister controller
	I1213 13:47:23.847848       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 13:47:23.847260       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 13:47:23.847348       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1213 13:47:23.854117       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 13:47:23.865021       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1213 13:47:23.868214       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:23.868233       1 policy_source.go:248] refreshing policies
	I1213 13:47:23.879150       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 13:47:24.112121       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 13:47:24.136539       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 13:47:24.152961       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 13:47:24.158825       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 13:47:24.165073       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 13:47:24.193919       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.157.228"}
	I1213 13:47:24.204362       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.53.170"}
	I1213 13:47:24.750067       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1213 13:47:27.386264       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:47:27.386310       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:47:27.440130       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 13:47:27.534495       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [5da4caf87e21e2c24c49feccc728cddb718f645b6ea4db87e0bf78cf3c81e434] <==
	I1213 13:47:26.991811       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-362964"
	I1213 13:47:26.991889       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1213 13:47:26.992568       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:26.992657       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:26.994844       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 13:47:26.997928       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:26.999456       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:26.999499       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:26.999924       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.001857       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.001951       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.001992       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.003158       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.003178       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.003298       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.004310       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.004326       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.004360       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.005586       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.010742       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.013962       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.013973       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 13:47:27.013977       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1213 13:47:27.016276       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:27.095753       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [af96593a4aff28be333ae03c866d3b489ad8ed84e208d79e37b6f05de74ac937] <==
	I1213 13:47:24.913801       1 server_linux.go:53] "Using iptables proxy"
	I1213 13:47:24.985504       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 13:47:25.086565       1 shared_informer.go:377] "Caches are synced"
	I1213 13:47:25.086604       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1213 13:47:25.086709       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:47:25.104572       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:47:25.104643       1 server_linux.go:136] "Using iptables Proxier"
	I1213 13:47:25.109745       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:47:25.110206       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 13:47:25.110234       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:47:25.112117       1 config.go:200] "Starting service config controller"
	I1213 13:47:25.112141       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:47:25.111822       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:47:25.112169       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:47:25.112217       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:47:25.112242       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:47:25.112291       1 config.go:309] "Starting node config controller"
	I1213 13:47:25.112302       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:47:25.213201       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 13:47:25.213276       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:47:25.213295       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 13:47:25.213325       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [110b53112a1a28576070f2a2242056e28359eefcce484ce9f1badc19b9aa9fe0] <==
	I1213 13:47:22.313179       1 serving.go:386] Generated self-signed cert in-memory
	W1213 13:47:23.765610       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 13:47:23.765647       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 13:47:23.765659       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 13:47:23.765669       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 13:47:23.802597       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1213 13:47:23.802629       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:47:23.805627       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 13:47:23.805698       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 13:47:23.807304       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 13:47:23.807658       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 13:47:23.906299       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 13 13:47:23 newest-cni-362964 kubelet[677]: I1213 13:47:23.971332     677 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-362964"
	Dec 13 13:47:23 newest-cni-362964 kubelet[677]: I1213 13:47:23.971439     677 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-362964"
	Dec 13 13:47:23 newest-cni-362964 kubelet[677]: I1213 13:47:23.971477     677 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 13 13:47:23 newest-cni-362964 kubelet[677]: I1213 13:47:23.972367     677 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: I1213 13:47:24.013855     677 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-362964"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: E1213 13:47:24.019664     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-362964\" already exists" pod="kube-system/kube-controller-manager-newest-cni-362964"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: E1213 13:47:24.019820     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-362964" containerName="kube-controller-manager"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: I1213 13:47:24.508919     677 apiserver.go:52] "Watching apiserver"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: I1213 13:47:24.516651     677 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: E1213 13:47:24.553356     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-362964" containerName="kube-controller-manager"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: I1213 13:47:24.553418     677 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-362964"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: E1213 13:47:24.553622     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-362964" containerName="etcd"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: E1213 13:47:24.553717     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-362964" containerName="kube-scheduler"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: E1213 13:47:24.558642     677 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-362964\" already exists" pod="kube-system/kube-apiserver-newest-cni-362964"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: E1213 13:47:24.558721     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-362964" containerName="kube-apiserver"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: I1213 13:47:24.569576     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c081628a-7cdd-4b8c-9d28-9d95707c6064-xtables-lock\") pod \"kube-proxy-97cpx\" (UID: \"c081628a-7cdd-4b8c-9d28-9d95707c6064\") " pod="kube-system/kube-proxy-97cpx"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: I1213 13:47:24.569607     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c081628a-7cdd-4b8c-9d28-9d95707c6064-lib-modules\") pod \"kube-proxy-97cpx\" (UID: \"c081628a-7cdd-4b8c-9d28-9d95707c6064\") " pod="kube-system/kube-proxy-97cpx"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: I1213 13:47:24.569635     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0df822e7-da1c-43ee-9a1e-b2131ae84e50-cni-cfg\") pod \"kindnet-qk8dn\" (UID: \"0df822e7-da1c-43ee-9a1e-b2131ae84e50\") " pod="kube-system/kindnet-qk8dn"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: I1213 13:47:24.569657     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0df822e7-da1c-43ee-9a1e-b2131ae84e50-xtables-lock\") pod \"kindnet-qk8dn\" (UID: \"0df822e7-da1c-43ee-9a1e-b2131ae84e50\") " pod="kube-system/kindnet-qk8dn"
	Dec 13 13:47:24 newest-cni-362964 kubelet[677]: I1213 13:47:24.569699     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0df822e7-da1c-43ee-9a1e-b2131ae84e50-lib-modules\") pod \"kindnet-qk8dn\" (UID: \"0df822e7-da1c-43ee-9a1e-b2131ae84e50\") " pod="kube-system/kindnet-qk8dn"
	Dec 13 13:47:25 newest-cni-362964 kubelet[677]: E1213 13:47:25.558465     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-362964" containerName="kube-apiserver"
	Dec 13 13:47:26 newest-cni-362964 kubelet[677]: E1213 13:47:26.073655     677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-362964" containerName="kube-scheduler"
	Dec 13 13:47:26 newest-cni-362964 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 13:47:26 newest-cni-362964 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 13:47:26 newest-cni-362964 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
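The node description in the logs above reports Ready=False with reason KubeletNotReady: "no CNI configuration file in /etc/cni/net.d/". A minimal diagnostic sketch follows (hypothetical file name and program, not part of the test suite) that simply lists that directory on the node, e.g. after `minikube ssh -p newest-cni-362964`, to confirm whether a CNI conflist ever landed there:

    // cnicheck.go - hypothetical diagnostic, not part of the test suite.
    // The node condition above says "no CNI configuration file in
    // /etc/cni/net.d/", so list what is actually present in that directory.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	dir := "/etc/cni/net.d" // run inside the node, e.g. after: minikube ssh -p newest-cni-362964
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Printf("cannot read %s: %v\n", dir, err)
    		return
    	}
    	if len(entries) == 0 {
    		fmt.Printf("%s is empty, matching the KubeletNotReady reason above\n", dir)
    		return
    	}
    	for _, e := range entries {
    		fmt.Println(filepath.Join(dir, e.Name()))
    	}
    }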
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-362964 -n newest-cni-362964
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-362964 -n newest-cni-362964: exit status 2 (343.661204ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-362964 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-rqktl storage-provisioner dashboard-metrics-scraper-867fb5f87b-t2g2n kubernetes-dashboard-b84665fb8-lghjt
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-362964 describe pod coredns-7d764666f9-rqktl storage-provisioner dashboard-metrics-scraper-867fb5f87b-t2g2n kubernetes-dashboard-b84665fb8-lghjt
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-362964 describe pod coredns-7d764666f9-rqktl storage-provisioner dashboard-metrics-scraper-867fb5f87b-t2g2n kubernetes-dashboard-b84665fb8-lghjt: exit status 1 (65.791445ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-rqktl" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-t2g2n" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-lghjt" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-362964 describe pod coredns-7d764666f9-rqktl storage-provisioner dashboard-metrics-scraper-867fb5f87b-t2g2n kubernetes-dashboard-b84665fb8-lghjt: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.98s)
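The post-mortem above finds the non-running pods with `kubectl get po -A --field-selector=status.phase!=Running`. For reference, a minimal client-go sketch of the same query (an assumed standalone program, not part of helpers_test.go); the context name matches the profile used above:

    // nonrunningpods.go - assumed standalone sketch, not part of helpers_test.go.
    // client-go equivalent of:
    //   kubectl --context newest-cni-362964 get po -A --field-selector=status.phase!=Running
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
    		clientcmd.NewDefaultClientConfigLoadingRules(),
    		&clientcmd.ConfigOverrides{CurrentContext: "newest-cni-362964"},
    	).ClientConfig()
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Empty namespace means all namespaces; the field selector keeps only
    	// pods whose phase is not Running (Pending, Succeeded, Failed, Unknown).
    	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
    		FieldSelector: "status.phase!=Running",
    	})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
    	}
    }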

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-038239 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-038239 --alsologtostderr -v=1: exit status 80 (2.363471283s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-038239 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:47:26.478695  745989 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:47:26.478839  745989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:47:26.478849  745989 out.go:374] Setting ErrFile to fd 2...
	I1213 13:47:26.478853  745989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:47:26.479039  745989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:47:26.479250  745989 out.go:368] Setting JSON to false
	I1213 13:47:26.479268  745989 mustload.go:66] Loading cluster: default-k8s-diff-port-038239
	I1213 13:47:26.479682  745989 config.go:182] Loaded profile config "default-k8s-diff-port-038239": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:47:26.480117  745989 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-038239 --format={{.State.Status}}
	I1213 13:47:26.499858  745989 host.go:66] Checking if "default-k8s-diff-port-038239" exists ...
	I1213 13:47:26.500176  745989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:47:26.560691  745989 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-13 13:47:26.550527936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:47:26.561658  745989 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765613186-22122/minikube-v1.37.0-1765613186-22122-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765613186-22122-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-038239 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1213 13:47:26.563451  745989 out.go:179] * Pausing node default-k8s-diff-port-038239 ... 
	I1213 13:47:26.564546  745989 host.go:66] Checking if "default-k8s-diff-port-038239" exists ...
	I1213 13:47:26.564878  745989 ssh_runner.go:195] Run: systemctl --version
	I1213 13:47:26.564924  745989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-038239
	I1213 13:47:26.581948  745989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/default-k8s-diff-port-038239/id_rsa Username:docker}
	I1213 13:47:26.676447  745989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:47:26.689587  745989 pause.go:52] kubelet running: true
	I1213 13:47:26.689656  745989 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 13:47:26.863755  745989 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 13:47:26.863913  745989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 13:47:26.945972  745989 cri.go:89] found id: "0c83493337f4062fa6f81e8d01db7498fef9b6315d8fe35541f7e23b56f0a375"
	I1213 13:47:26.946000  745989 cri.go:89] found id: "8405085ebe705ba7b423d0c2b5d4883fb997fcd33bd7a956ea769773c6341e34"
	I1213 13:47:26.946006  745989 cri.go:89] found id: "5795013750c24c2283403e4012508ffdb318fa62ff8d0f01376a3b277bfc99f8"
	I1213 13:47:26.946012  745989 cri.go:89] found id: "64ba07c204a50056a0dfebc7954692a6ef002bb0ddac55dae71b35ceda35cfd1"
	I1213 13:47:26.946016  745989 cri.go:89] found id: "ece6cabbfd4de4c0bf871f5e57ce5c4769621be9c162b8537aebfca43ac97e90"
	I1213 13:47:26.946019  745989 cri.go:89] found id: "e666bfd89f30f85cd8c1e8c64c04b77df4cb27f6c7df7838bdfaf6bf54d5ab00"
	I1213 13:47:26.946032  745989 cri.go:89] found id: "99cbb0e73d2197ad662dba2a00e0ec2f3ce53cd9276e552c1ca3a62cac601105"
	I1213 13:47:26.946037  745989 cri.go:89] found id: "6d6fd6c98d01d12e4674d5a2044ea8579a053244365f2f43c908c34dac570480"
	I1213 13:47:26.946041  745989 cri.go:89] found id: "334a9f2c1095a76f324f17afb6dae5685e1e8043861620865467bb49011fd8ea"
	I1213 13:47:26.946049  745989 cri.go:89] found id: "6ded9f03b4daa14802c47d962fca913f09fb2e6a6f9427e0a4c0c99b83f2a573"
	I1213 13:47:26.946058  745989 cri.go:89] found id: "3204f1765ba08dd53b15129320ee6b079bc92ca458fd51509e14ccc8640a8ccc"
	I1213 13:47:26.946063  745989 cri.go:89] found id: ""
	I1213 13:47:26.946108  745989 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:47:26.957520  745989 retry.go:31] will retry after 164.128175ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:47:26Z" level=error msg="open /run/runc: no such file or directory"
	I1213 13:47:27.121902  745989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:47:27.135107  745989 pause.go:52] kubelet running: false
	I1213 13:47:27.135174  745989 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 13:47:27.269681  745989 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 13:47:27.269749  745989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 13:47:27.343618  745989 cri.go:89] found id: "0c83493337f4062fa6f81e8d01db7498fef9b6315d8fe35541f7e23b56f0a375"
	I1213 13:47:27.343645  745989 cri.go:89] found id: "8405085ebe705ba7b423d0c2b5d4883fb997fcd33bd7a956ea769773c6341e34"
	I1213 13:47:27.343652  745989 cri.go:89] found id: "5795013750c24c2283403e4012508ffdb318fa62ff8d0f01376a3b277bfc99f8"
	I1213 13:47:27.343658  745989 cri.go:89] found id: "64ba07c204a50056a0dfebc7954692a6ef002bb0ddac55dae71b35ceda35cfd1"
	I1213 13:47:27.343676  745989 cri.go:89] found id: "ece6cabbfd4de4c0bf871f5e57ce5c4769621be9c162b8537aebfca43ac97e90"
	I1213 13:47:27.343682  745989 cri.go:89] found id: "e666bfd89f30f85cd8c1e8c64c04b77df4cb27f6c7df7838bdfaf6bf54d5ab00"
	I1213 13:47:27.343687  745989 cri.go:89] found id: "99cbb0e73d2197ad662dba2a00e0ec2f3ce53cd9276e552c1ca3a62cac601105"
	I1213 13:47:27.343692  745989 cri.go:89] found id: "6d6fd6c98d01d12e4674d5a2044ea8579a053244365f2f43c908c34dac570480"
	I1213 13:47:27.343697  745989 cri.go:89] found id: "334a9f2c1095a76f324f17afb6dae5685e1e8043861620865467bb49011fd8ea"
	I1213 13:47:27.343705  745989 cri.go:89] found id: "6ded9f03b4daa14802c47d962fca913f09fb2e6a6f9427e0a4c0c99b83f2a573"
	I1213 13:47:27.343714  745989 cri.go:89] found id: "3204f1765ba08dd53b15129320ee6b079bc92ca458fd51509e14ccc8640a8ccc"
	I1213 13:47:27.343718  745989 cri.go:89] found id: ""
	I1213 13:47:27.343758  745989 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:47:27.355538  745989 retry.go:31] will retry after 291.503415ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:47:27Z" level=error msg="open /run/runc: no such file or directory"
	I1213 13:47:27.648095  745989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:47:27.660614  745989 pause.go:52] kubelet running: false
	I1213 13:47:27.660698  745989 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 13:47:27.809843  745989 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 13:47:27.809936  745989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 13:47:27.874482  745989 cri.go:89] found id: "0c83493337f4062fa6f81e8d01db7498fef9b6315d8fe35541f7e23b56f0a375"
	I1213 13:47:27.874506  745989 cri.go:89] found id: "8405085ebe705ba7b423d0c2b5d4883fb997fcd33bd7a956ea769773c6341e34"
	I1213 13:47:27.874510  745989 cri.go:89] found id: "5795013750c24c2283403e4012508ffdb318fa62ff8d0f01376a3b277bfc99f8"
	I1213 13:47:27.874513  745989 cri.go:89] found id: "64ba07c204a50056a0dfebc7954692a6ef002bb0ddac55dae71b35ceda35cfd1"
	I1213 13:47:27.874517  745989 cri.go:89] found id: "ece6cabbfd4de4c0bf871f5e57ce5c4769621be9c162b8537aebfca43ac97e90"
	I1213 13:47:27.874520  745989 cri.go:89] found id: "e666bfd89f30f85cd8c1e8c64c04b77df4cb27f6c7df7838bdfaf6bf54d5ab00"
	I1213 13:47:27.874523  745989 cri.go:89] found id: "99cbb0e73d2197ad662dba2a00e0ec2f3ce53cd9276e552c1ca3a62cac601105"
	I1213 13:47:27.874525  745989 cri.go:89] found id: "6d6fd6c98d01d12e4674d5a2044ea8579a053244365f2f43c908c34dac570480"
	I1213 13:47:27.874528  745989 cri.go:89] found id: "334a9f2c1095a76f324f17afb6dae5685e1e8043861620865467bb49011fd8ea"
	I1213 13:47:27.874534  745989 cri.go:89] found id: "6ded9f03b4daa14802c47d962fca913f09fb2e6a6f9427e0a4c0c99b83f2a573"
	I1213 13:47:27.874537  745989 cri.go:89] found id: "3204f1765ba08dd53b15129320ee6b079bc92ca458fd51509e14ccc8640a8ccc"
	I1213 13:47:27.874540  745989 cri.go:89] found id: ""
	I1213 13:47:27.874584  745989 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:47:27.886472  745989 retry.go:31] will retry after 625.798616ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:47:27Z" level=error msg="open /run/runc: no such file or directory"
	I1213 13:47:28.512975  745989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:47:28.524901  745989 pause.go:52] kubelet running: false
	I1213 13:47:28.524989  745989 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 13:47:28.685136  745989 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 13:47:28.685220  745989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 13:47:28.753742  745989 cri.go:89] found id: "0c83493337f4062fa6f81e8d01db7498fef9b6315d8fe35541f7e23b56f0a375"
	I1213 13:47:28.753768  745989 cri.go:89] found id: "8405085ebe705ba7b423d0c2b5d4883fb997fcd33bd7a956ea769773c6341e34"
	I1213 13:47:28.753787  745989 cri.go:89] found id: "5795013750c24c2283403e4012508ffdb318fa62ff8d0f01376a3b277bfc99f8"
	I1213 13:47:28.753792  745989 cri.go:89] found id: "64ba07c204a50056a0dfebc7954692a6ef002bb0ddac55dae71b35ceda35cfd1"
	I1213 13:47:28.753797  745989 cri.go:89] found id: "ece6cabbfd4de4c0bf871f5e57ce5c4769621be9c162b8537aebfca43ac97e90"
	I1213 13:47:28.753802  745989 cri.go:89] found id: "e666bfd89f30f85cd8c1e8c64c04b77df4cb27f6c7df7838bdfaf6bf54d5ab00"
	I1213 13:47:28.753806  745989 cri.go:89] found id: "99cbb0e73d2197ad662dba2a00e0ec2f3ce53cd9276e552c1ca3a62cac601105"
	I1213 13:47:28.753810  745989 cri.go:89] found id: "6d6fd6c98d01d12e4674d5a2044ea8579a053244365f2f43c908c34dac570480"
	I1213 13:47:28.753814  745989 cri.go:89] found id: "334a9f2c1095a76f324f17afb6dae5685e1e8043861620865467bb49011fd8ea"
	I1213 13:47:28.753836  745989 cri.go:89] found id: "6ded9f03b4daa14802c47d962fca913f09fb2e6a6f9427e0a4c0c99b83f2a573"
	I1213 13:47:28.753845  745989 cri.go:89] found id: "3204f1765ba08dd53b15129320ee6b079bc92ca458fd51509e14ccc8640a8ccc"
	I1213 13:47:28.753855  745989 cri.go:89] found id: ""
	I1213 13:47:28.753903  745989 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 13:47:28.768296  745989 out.go:203] 
	W1213 13:47:28.769378  745989 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:47:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:47:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 13:47:28.769400  745989 out.go:285] * 
	* 
	W1213 13:47:28.774897  745989 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 13:47:28.776168  745989 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-038239 --alsologtostderr -v=1 failed: exit status 80
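As in the earlier Pause failures, the trace above disables the kubelet, lists the kube-system/kubernetes-dashboard/istio-operator containers via crictl, then fails on `sudo runc list -f json` with "open /run/runc: no such file or directory", retrying with growing backoff before exiting with GUEST_PAUSE. Below is a minimal sketch of that probe (an assumed standalone program with illustrative delays, not minikube's actual retry code), intended to be run inside the node:

    // runcprobe.go - assumed standalone sketch, not minikube's pause code;
    // the delays are illustrative (the trace above retries after ~164ms, ~291ms, ~625ms).
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	delays := []time.Duration{200 * time.Millisecond, 400 * time.Millisecond, 800 * time.Millisecond}
    	for i, d := range delays {
    		// Same command the pause path runs over SSH inside the node.
    		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
    		if err == nil {
    			fmt.Printf("attempt %d ok:\n%s", i+1, out)
    			return
    		}
    		fmt.Printf("attempt %d failed: %v\n%s", i+1, err, out)
    		time.Sleep(d)
    	}
    	fmt.Println("still failing, which is when minikube exits with GUEST_PAUSE above")
    }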
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-038239
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-038239:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "284f8c641cab5a0fbe10d63636e1daa6c38652a3d8e4ed0d0d00ddebf73de3da",
	        "Created": "2025-12-13T13:45:28.121473239Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 731169,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:46:30.662135516Z",
	            "FinishedAt": "2025-12-13T13:46:29.325549748Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/284f8c641cab5a0fbe10d63636e1daa6c38652a3d8e4ed0d0d00ddebf73de3da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/284f8c641cab5a0fbe10d63636e1daa6c38652a3d8e4ed0d0d00ddebf73de3da/hostname",
	        "HostsPath": "/var/lib/docker/containers/284f8c641cab5a0fbe10d63636e1daa6c38652a3d8e4ed0d0d00ddebf73de3da/hosts",
	        "LogPath": "/var/lib/docker/containers/284f8c641cab5a0fbe10d63636e1daa6c38652a3d8e4ed0d0d00ddebf73de3da/284f8c641cab5a0fbe10d63636e1daa6c38652a3d8e4ed0d0d00ddebf73de3da-json.log",
	        "Name": "/default-k8s-diff-port-038239",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-038239:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-038239",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "284f8c641cab5a0fbe10d63636e1daa6c38652a3d8e4ed0d0d00ddebf73de3da",
	                "LowerDir": "/var/lib/docker/overlay2/60f326094624426fad6e6847f8117422b0fa3770373cf2b7510f46843322aed1-init/diff:/var/lib/docker/overlay2/2ab30f867418f233812f5ff754587aaeab7569a5579dc6a5c99873a35cf81eb6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/60f326094624426fad6e6847f8117422b0fa3770373cf2b7510f46843322aed1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/60f326094624426fad6e6847f8117422b0fa3770373cf2b7510f46843322aed1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/60f326094624426fad6e6847f8117422b0fa3770373cf2b7510f46843322aed1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-038239",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-038239/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-038239",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-038239",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-038239",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "2775ce49e3d7742efea109783b5f01f3210028bb6bb61f89e58842c5fc1256aa",
	            "SandboxKey": "/var/run/docker/netns/2775ce49e3d7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33513"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-038239": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "251672d224afb460f1a8362b4545aae5d977bdecd5cdddf5909169b2b5623ddc",
	                    "EndpointID": "74adb9b05fce784c0beecfa2e277b193d663a7d9fd1727ac491d4086d579f64b",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "6a:3e:27:77:8b:ad",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-038239",
	                        "284f8c641cab"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
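The inspect output above ends with the container's published ports: 22, 2376, 5000, 32443 and 8444/tcp (the non-default API-server port this profile is named for) are each bound to a loopback-only host port. Any single field can be read back with the same Go-template style the harness itself uses elsewhere in this log; as a sketch against this run's profile name:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-038239

On this run that resolves to 127.0.0.1:33512, matching the Ports map above.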
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-038239 -n default-k8s-diff-port-038239
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-038239 -n default-k8s-diff-port-038239: exit status 2 (326.105948ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
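Because the status call uses --format={{.Host}}, only the Host field is printed; it reads Running, while the non-zero exit status encodes that something beyond the host is not fully operational (the harness itself notes this "may be ok" right after a pause). Running the same binary without a format template also prints the kubelet, apiserver and kubeconfig fields and usually points at which one is off; flag spelling here assumes the v1.37.0 CLI used in this run:

    out/minikube-linux-amd64 status -p default-k8s-diff-port-038239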
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-038239 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-038239 logs -n 25: (1.154448383s)
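The Audit table and Last Start excerpt that follow are the output of that logs invocation; -n 25 limits how far back the collected logs are read. For a fuller post-mortem the same command can write its output to a file instead of stdout via --file (present in recent minikube releases), optionally with a larger -n; the path below is only an example:

    out/minikube-linux-amd64 -p default-k8s-diff-port-038239 logs -n 200 --file=/tmp/default-k8s-diff-port-038239.log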
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-038239 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-038239 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ image   │ old-k8s-version-417583 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p old-k8s-version-417583 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-038239 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p default-k8s-diff-port-038239 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:47 UTC │
	│ delete  │ -p old-k8s-version-417583                                                                                                                                                                                                                            │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ delete  │ -p old-k8s-version-417583                                                                                                                                                                                                                            │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p newest-cni-362964 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:47 UTC │
	│ image   │ no-preload-992258 image list --format=json                                                                                                                                                                                                           │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p no-preload-992258 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ delete  │ -p no-preload-992258                                                                                                                                                                                                                                 │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ image   │ embed-certs-973953 image list --format=json                                                                                                                                                                                                          │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p embed-certs-973953 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ delete  │ -p no-preload-992258                                                                                                                                                                                                                                 │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ delete  │ -p embed-certs-973953                                                                                                                                                                                                                                │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ addons  │ enable metrics-server -p newest-cni-362964 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │                     │
	│ delete  │ -p embed-certs-973953                                                                                                                                                                                                                                │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ stop    │ -p newest-cni-362964 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ addons  │ enable dashboard -p newest-cni-362964 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ start   │ -p newest-cni-362964 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ image   │ newest-cni-362964 image list --format=json                                                                                                                                                                                                           │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ pause   │ -p newest-cni-362964 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │                     │
	│ image   │ default-k8s-diff-port-038239 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ pause   │ -p default-k8s-diff-port-038239 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:47:15
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:47:15.396688  743793 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:47:15.396994  743793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:47:15.397008  743793 out.go:374] Setting ErrFile to fd 2...
	I1213 13:47:15.397013  743793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:47:15.397213  743793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:47:15.397667  743793 out.go:368] Setting JSON to false
	I1213 13:47:15.398880  743793 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8983,"bootTime":1765624652,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:47:15.398951  743793 start.go:143] virtualization: kvm guest
	I1213 13:47:15.401034  743793 out.go:179] * [newest-cni-362964] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:47:15.402467  743793 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:47:15.402473  743793 notify.go:221] Checking for updates...
	I1213 13:47:15.404652  743793 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:47:15.406062  743793 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:47:15.407288  743793 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:47:15.408413  743793 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:47:15.409475  743793 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:47:15.411121  743793 config.go:182] Loaded profile config "newest-cni-362964": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:47:15.411682  743793 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:47:15.438362  743793 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:47:15.438448  743793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:47:15.493074  743793 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-13 13:47:15.482438278 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:47:15.493183  743793 docker.go:319] overlay module found
	I1213 13:47:15.494725  743793 out.go:179] * Using the docker driver based on existing profile
	I1213 13:47:15.495689  743793 start.go:309] selected driver: docker
	I1213 13:47:15.495700  743793 start.go:927] validating driver "docker" against &{Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:47:15.495792  743793 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:47:15.496338  743793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:47:15.549301  743793 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-13 13:47:15.539654496 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:47:15.549628  743793 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 13:47:15.549663  743793 cni.go:84] Creating CNI manager for ""
	I1213 13:47:15.549714  743793 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:47:15.549744  743793 start.go:353] cluster config:
	{Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:47:15.551716  743793 out.go:179] * Starting "newest-cni-362964" primary control-plane node in "newest-cni-362964" cluster
	I1213 13:47:15.552847  743793 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 13:47:15.553932  743793 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 13:47:15.555067  743793 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:47:15.555097  743793 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1213 13:47:15.555109  743793 cache.go:65] Caching tarball of preloaded images
	I1213 13:47:15.555160  743793 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 13:47:15.555218  743793 preload.go:238] Found /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 13:47:15.555231  743793 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 13:47:15.555336  743793 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/config.json ...
	I1213 13:47:15.575398  743793 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 13:47:15.575441  743793 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 13:47:15.575458  743793 cache.go:243] Successfully downloaded all kic artifacts
	I1213 13:47:15.575494  743793 start.go:360] acquireMachinesLock for newest-cni-362964: {Name:mk61572d281c54a6e0670409b0733cc12a8d00e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:47:15.575570  743793 start.go:364] duration metric: took 43.268µs to acquireMachinesLock for "newest-cni-362964"
	I1213 13:47:15.575593  743793 start.go:96] Skipping create...Using existing machine configuration
	I1213 13:47:15.575602  743793 fix.go:54] fixHost starting: 
	I1213 13:47:15.575821  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:15.591746  743793 fix.go:112] recreateIfNeeded on newest-cni-362964: state=Stopped err=<nil>
	W1213 13:47:15.591771  743793 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 13:47:15.593293  743793 out.go:252] * Restarting existing docker container for "newest-cni-362964" ...
	I1213 13:47:15.593360  743793 cli_runner.go:164] Run: docker start newest-cni-362964
	I1213 13:47:15.823041  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:15.840403  743793 kic.go:430] container "newest-cni-362964" state is running.
	I1213 13:47:15.840799  743793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:47:15.859084  743793 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/config.json ...
	I1213 13:47:15.859284  743793 machine.go:94] provisionDockerMachine start ...
	I1213 13:47:15.859348  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:15.877682  743793 main.go:143] libmachine: Using SSH client type: native
	I1213 13:47:15.878002  743793 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1213 13:47:15.878021  743793 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:47:15.878655  743793 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47684->127.0.0.1:33520: read: connection reset by peer
	I1213 13:47:19.010243  743793 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-362964
	
	I1213 13:47:19.010275  743793 ubuntu.go:182] provisioning hostname "newest-cni-362964"
	I1213 13:47:19.010342  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:19.028649  743793 main.go:143] libmachine: Using SSH client type: native
	I1213 13:47:19.028926  743793 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1213 13:47:19.028944  743793 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-362964 && echo "newest-cni-362964" | sudo tee /etc/hostname
	I1213 13:47:19.167849  743793 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-362964
	
	I1213 13:47:19.167956  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:19.185955  743793 main.go:143] libmachine: Using SSH client type: native
	I1213 13:47:19.186189  743793 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1213 13:47:19.186207  743793 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-362964' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-362964/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-362964' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:47:19.317047  743793 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 13:47:19.317087  743793 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-390571/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-390571/.minikube}
	I1213 13:47:19.317117  743793 ubuntu.go:190] setting up certificates
	I1213 13:47:19.317129  743793 provision.go:84] configureAuth start
	I1213 13:47:19.317214  743793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:47:19.335703  743793 provision.go:143] copyHostCerts
	I1213 13:47:19.335786  743793 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem, removing ...
	I1213 13:47:19.335809  743793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem
	I1213 13:47:19.335895  743793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem (1078 bytes)
	I1213 13:47:19.336007  743793 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem, removing ...
	I1213 13:47:19.336019  743793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem
	I1213 13:47:19.336046  743793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem (1123 bytes)
	I1213 13:47:19.336102  743793 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem, removing ...
	I1213 13:47:19.336109  743793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem
	I1213 13:47:19.336133  743793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem (1679 bytes)
	I1213 13:47:19.336181  743793 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem org=jenkins.newest-cni-362964 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-362964]
	I1213 13:47:19.394450  743793 provision.go:177] copyRemoteCerts
	I1213 13:47:19.394502  743793 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:47:19.394534  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:19.411750  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:19.507173  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:47:19.523766  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 13:47:19.539629  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 13:47:19.555589  743793 provision.go:87] duration metric: took 238.436679ms to configureAuth
	I1213 13:47:19.555611  743793 ubuntu.go:206] setting minikube options for container-runtime
	I1213 13:47:19.555811  743793 config.go:182] Loaded profile config "newest-cni-362964": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:47:19.555940  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:19.574297  743793 main.go:143] libmachine: Using SSH client type: native
	I1213 13:47:19.574507  743793 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1213 13:47:19.574528  743793 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:47:19.858331  743793 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:47:19.858358  743793 machine.go:97] duration metric: took 3.999058826s to provisionDockerMachine
	I1213 13:47:19.858370  743793 start.go:293] postStartSetup for "newest-cni-362964" (driver="docker")
	I1213 13:47:19.858383  743793 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:47:19.858433  743793 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:47:19.858474  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:19.876049  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:19.971385  743793 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:47:19.974791  743793 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 13:47:19.974823  743793 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 13:47:19.974837  743793 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/addons for local assets ...
	I1213 13:47:19.974893  743793 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/files for local assets ...
	I1213 13:47:19.974988  743793 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem -> 3941302.pem in /etc/ssl/certs
	I1213 13:47:19.975100  743793 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 13:47:19.983356  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:47:20.000196  743793 start.go:296] duration metric: took 141.812887ms for postStartSetup
	I1213 13:47:20.000258  743793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:47:20.000314  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:20.017528  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:20.108567  743793 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 13:47:20.112996  743793 fix.go:56] duration metric: took 4.537388049s for fixHost
	I1213 13:47:20.113024  743793 start.go:83] releasing machines lock for "newest-cni-362964", held for 4.537441376s
	I1213 13:47:20.113088  743793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:47:20.130726  743793 ssh_runner.go:195] Run: cat /version.json
	I1213 13:47:20.130793  743793 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:47:20.130802  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:20.130878  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:20.148822  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:20.150135  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:20.295420  743793 ssh_runner.go:195] Run: systemctl --version
	I1213 13:47:20.301859  743793 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:47:20.336602  743793 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 13:47:20.341211  743793 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:47:20.341287  743793 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:47:20.349054  743793 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 13:47:20.349073  743793 start.go:496] detecting cgroup driver to use...
	I1213 13:47:20.349107  743793 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 13:47:20.349163  743793 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:47:20.362399  743793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:47:20.374254  743793 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:47:20.374306  743793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:47:20.388234  743793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:47:20.399676  743793 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:47:20.474562  743793 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:47:20.554145  743793 docker.go:234] disabling docker service ...
	I1213 13:47:20.554226  743793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:47:20.568676  743793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:47:20.580110  743793 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:47:20.658535  743793 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:47:20.737176  743793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:47:20.748856  743793 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:47:20.762068  743793 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 13:47:20.762129  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.770675  743793 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 13:47:20.770733  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.779036  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.786966  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.795024  743793 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:47:20.802620  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.810583  743793 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.818388  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.826379  743793 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:47:20.833195  743793 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:47:20.839891  743793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:47:20.916966  743793 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 13:47:21.051005  743793 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:47:21.051061  743793 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:47:21.054826  743793 start.go:564] Will wait 60s for crictl version
	I1213 13:47:21.054891  743793 ssh_runner.go:195] Run: which crictl
	I1213 13:47:21.058302  743793 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 13:47:21.081279  743793 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 13:47:21.081361  743793 ssh_runner.go:195] Run: crio --version
	I1213 13:47:21.110234  743793 ssh_runner.go:195] Run: crio --version
	I1213 13:47:21.139180  743793 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 13:47:21.140352  743793 cli_runner.go:164] Run: docker network inspect newest-cni-362964 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:47:21.158817  743793 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 13:47:21.162980  743793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:47:21.174525  743793 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 13:47:21.175553  743793 kubeadm.go:884] updating cluster {Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:47:21.175710  743793 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:47:21.175761  743793 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:47:21.210092  743793 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:47:21.210114  743793 crio.go:433] Images already preloaded, skipping extraction
	I1213 13:47:21.210160  743793 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:47:21.234690  743793 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:47:21.234711  743793 cache_images.go:86] Images are preloaded, skipping loading
	I1213 13:47:21.234719  743793 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 13:47:21.234845  743793 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-362964 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 13:47:21.234912  743793 ssh_runner.go:195] Run: crio config
	I1213 13:47:21.282460  743793 cni.go:84] Creating CNI manager for ""
	I1213 13:47:21.282487  743793 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:47:21.282509  743793 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 13:47:21.282539  743793 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-362964 NodeName:newest-cni-362964 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:47:21.282708  743793 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-362964"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 13:47:21.282807  743793 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 13:47:21.290750  743793 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:47:21.290846  743793 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:47:21.298168  743793 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 13:47:21.310228  743793 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 13:47:21.322581  743793 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1213 13:47:21.334293  743793 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 13:47:21.337735  743793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:47:21.347200  743793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:47:21.425075  743793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:47:21.445740  743793 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964 for IP: 192.168.76.2
	I1213 13:47:21.445766  743793 certs.go:195] generating shared ca certs ...
	I1213 13:47:21.445805  743793 certs.go:227] acquiring lock for ca certs: {Name:mkb6963f3134ffd486c672ddb3a967e56122d5d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:47:21.445974  743793 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key
	I1213 13:47:21.446031  743793 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key
	I1213 13:47:21.446043  743793 certs.go:257] generating profile certs ...
	I1213 13:47:21.446154  743793 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.key
	I1213 13:47:21.446224  743793 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb
	I1213 13:47:21.446272  743793 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key
	I1213 13:47:21.446406  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem (1338 bytes)
	W1213 13:47:21.446452  743793 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130_empty.pem, impossibly tiny 0 bytes
	I1213 13:47:21.446466  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:47:21.446502  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:47:21.446547  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:47:21.446593  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem (1679 bytes)
	I1213 13:47:21.446654  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:47:21.447541  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:47:21.465298  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 13:47:21.483440  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:47:21.502103  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 13:47:21.522629  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 13:47:21.542814  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 13:47:21.559135  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:47:21.575971  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 13:47:21.591916  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem --> /usr/share/ca-certificates/394130.pem (1338 bytes)
	I1213 13:47:21.608394  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /usr/share/ca-certificates/3941302.pem (1708 bytes)
	I1213 13:47:21.624618  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:47:21.642224  743793 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:47:21.654261  743793 ssh_runner.go:195] Run: openssl version
	I1213 13:47:21.660050  743793 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/394130.pem
	I1213 13:47:21.668116  743793 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/394130.pem /etc/ssl/certs/394130.pem
	I1213 13:47:21.675369  743793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/394130.pem
	I1213 13:47:21.679216  743793 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/394130.pem
	I1213 13:47:21.679263  743793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/394130.pem
	I1213 13:47:21.712864  743793 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 13:47:21.720130  743793 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3941302.pem
	I1213 13:47:21.727050  743793 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3941302.pem /etc/ssl/certs/3941302.pem
	I1213 13:47:21.733917  743793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3941302.pem
	I1213 13:47:21.737465  743793 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/3941302.pem
	I1213 13:47:21.737512  743793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3941302.pem
	I1213 13:47:21.771088  743793 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 13:47:21.778112  743793 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:47:21.784916  743793 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:47:21.791680  743793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:47:21.794961  743793 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:47:21.795003  743793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:47:21.829333  743793 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
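	The openssl/ln/test sequence above is the standard OpenSSL CA-directory layout: each certificate is copied under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its subject hash (b5213941.0 for minikubeCA.pem in this run). A minimal sketch of the same linking by hand, assuming those paths:

	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	    test -L "/etc/ssl/certs/${HASH}.0" && echo linked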
	I1213 13:47:21.836601  743793 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:47:21.840092  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 13:47:21.873865  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 13:47:21.907577  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 13:47:21.942677  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 13:47:21.990730  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 13:47:22.038527  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
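	Each of the openssl x509 -checkend 86400 calls above exits non-zero if the certificate expires within the next 24 hours, which minikube uses to decide whether the control-plane certs still have at least a day of validity. The same check can be run by hand against any of the paths listed, for example:

	    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
	    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	      && echo "valid for >24h" || echo "expires within 24h"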
	I1213 13:47:22.089275  743793 kubeadm.go:401] StartCluster: {Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:47:22.089396  743793 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:47:22.089456  743793 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:47:22.124825  743793 cri.go:89] found id: "5da4caf87e21e2c24c49feccc728cddb718f645b6ea4db87e0bf78cf3c81e434"
	I1213 13:47:22.124858  743793 cri.go:89] found id: "bf5900d175c7d2696a8f8d812ce80bb83b78d5e729b180a06ffe24fd4380248b"
	I1213 13:47:22.124862  743793 cri.go:89] found id: "110b53112a1a28576070f2a2242056e28359eefcce484ce9f1badc19b9aa9fe0"
	I1213 13:47:22.124866  743793 cri.go:89] found id: "467a2e1a14516b194138faf28743f2e31cc6c2c67e3a2b45354fa6c0ff15d609"
	I1213 13:47:22.124869  743793 cri.go:89] found id: ""
	I1213 13:47:22.124908  743793 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 13:47:22.137341  743793 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:47:22Z" level=error msg="open /run/runc: no such file or directory"
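	The runc failure above is logged at warning level (the W prefix) and is non-fatal: list fails because the default state directory /run/runc does not exist, which here simply means runc has nothing recorded there to unpause, and the flow continues with the cluster-restart path below. The CRI listing just before it already returned the four kube-system container IDs. For reference, the two inventories it tried, runnable on the node:

	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	    sudo runc list -f json    # fails with the same error while /run/runc is absent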
	I1213 13:47:22.137416  743793 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:47:22.145362  743793 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 13:47:22.145377  743793 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 13:47:22.145421  743793 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 13:47:22.152664  743793 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 13:47:22.153211  743793 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-362964" does not appear in /home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:47:22.153502  743793 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-390571/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-362964" cluster setting kubeconfig missing "newest-cni-362964" context setting]
	I1213 13:47:22.154092  743793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/kubeconfig: {Name:mke96882ff9199e558f67b9408c8f04265bde7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:47:22.155563  743793 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 13:47:22.163308  743793 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1213 13:47:22.163339  743793 kubeadm.go:602] duration metric: took 17.955654ms to restartPrimaryControlPlane
	I1213 13:47:22.163350  743793 kubeadm.go:403] duration metric: took 74.090212ms to StartCluster
	I1213 13:47:22.163370  743793 settings.go:142] acquiring lock: {Name:mkb44193ba58b09d8615650747eaad19c43e1a80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:47:22.163433  743793 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:47:22.164305  743793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/kubeconfig: {Name:mke96882ff9199e558f67b9408c8f04265bde7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:47:22.164552  743793 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:47:22.164629  743793 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 13:47:22.164752  743793 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-362964"
	I1213 13:47:22.164768  743793 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-362964"
	W1213 13:47:22.164802  743793 addons.go:248] addon storage-provisioner should already be in state true
	I1213 13:47:22.164794  743793 addons.go:70] Setting dashboard=true in profile "newest-cni-362964"
	I1213 13:47:22.164827  743793 config.go:182] Loaded profile config "newest-cni-362964": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:47:22.164851  743793 host.go:66] Checking if "newest-cni-362964" exists ...
	I1213 13:47:22.164849  743793 addons.go:70] Setting default-storageclass=true in profile "newest-cni-362964"
	I1213 13:47:22.164885  743793 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-362964"
	I1213 13:47:22.164831  743793 addons.go:239] Setting addon dashboard=true in "newest-cni-362964"
	W1213 13:47:22.164998  743793 addons.go:248] addon dashboard should already be in state true
	I1213 13:47:22.165031  743793 host.go:66] Checking if "newest-cni-362964" exists ...
	I1213 13:47:22.165174  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:22.165349  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:22.165507  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:22.166701  743793 out.go:179] * Verifying Kubernetes components...
	I1213 13:47:22.167981  743793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:47:22.192293  743793 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 13:47:22.192607  743793 addons.go:239] Setting addon default-storageclass=true in "newest-cni-362964"
	W1213 13:47:22.192634  743793 addons.go:248] addon default-storageclass should already be in state true
	I1213 13:47:22.192668  743793 host.go:66] Checking if "newest-cni-362964" exists ...
	I1213 13:47:22.193205  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:22.193489  743793 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 13:47:22.194545  743793 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 13:47:22.194630  743793 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:47:22.194650  743793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 13:47:22.194717  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:22.197270  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 13:47:22.197289  743793 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 13:47:22.197336  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:22.223581  743793 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 13:47:22.223609  743793 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 13:47:22.223725  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:22.235014  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:22.236249  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:22.249859  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:22.322026  743793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:47:22.335417  743793 api_server.go:52] waiting for apiserver process to appear ...
	I1213 13:47:22.335494  743793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:47:22.347746  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 13:47:22.347860  743793 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 13:47:22.348903  743793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:47:22.348995  743793 api_server.go:72] duration metric: took 184.405092ms to wait for apiserver process to appear ...
	I1213 13:47:22.349020  743793 api_server.go:88] waiting for apiserver healthz status ...
	I1213 13:47:22.349038  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:22.357705  743793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 13:47:22.364407  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 13:47:22.364428  743793 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 13:47:22.378144  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 13:47:22.378163  743793 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 13:47:22.392593  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 13:47:22.392619  743793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 13:47:22.406406  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 13:47:22.406430  743793 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 13:47:22.420459  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 13:47:22.420499  743793 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 13:47:22.432910  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 13:47:22.432934  743793 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 13:47:22.444815  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 13:47:22.444841  743793 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 13:47:22.458380  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 13:47:22.458404  743793 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 13:47:22.471198  743793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 13:47:23.763047  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 13:47:23.763087  743793 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 13:47:23.763104  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:23.768125  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 13:47:23.768149  743793 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 13:47:23.849412  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:23.855339  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:47:23.855368  743793 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:47:24.312519  743793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.963582954s)
	I1213 13:47:24.312612  743793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.954874796s)
	I1213 13:47:24.312784  743793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.841538299s)
	I1213 13:47:24.314353  743793 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-362964 addons enable metrics-server
	
	I1213 13:47:24.323239  743793 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1213 13:47:24.324346  743793 addons.go:530] duration metric: took 2.159730307s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1213 13:47:24.349242  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:24.353865  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:47:24.353887  743793 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:47:24.849405  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:24.854959  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:47:24.854986  743793 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:47:25.349483  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:25.353727  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1213 13:47:25.354695  743793 api_server.go:141] control plane version: v1.35.0-beta.0
	I1213 13:47:25.354720  743793 api_server.go:131] duration metric: took 3.00569336s to wait for apiserver health ...
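	The 403 and 500 bodies during the wait above are the normal start-up progression: the probes go out as system:anonymous and are rejected while the rbac/bootstrap-roles post-start hook (which installs the role that allows anonymous /healthz access) is still pending, then /healthz reports 500 with the per-hook breakdown until every hook is ok, at which point it returns 200 and the wait ends (about 3s here). The same verbose report can be fetched manually; the kubectl form authenticates through the kubeconfig, the curl form is anonymous like the probe above:

	    kubectl get --raw '/healthz?verbose'
	    curl -k 'https://192.168.76.2:8443/healthz?verbose'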
	I1213 13:47:25.354729  743793 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 13:47:25.358250  743793 system_pods.go:59] 8 kube-system pods found
	I1213 13:47:25.358279  743793 system_pods.go:61] "coredns-7d764666f9-rqktl" [7c70d7d0-5139-4893-905c-0e183495035e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 13:47:25.358287  743793 system_pods.go:61] "etcd-newest-cni-362964" [49d03570-d59e-4e95-902f-1994733e6009] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 13:47:25.358295  743793 system_pods.go:61] "kindnet-qk8dn" [0df822e7-da1c-43ee-9a1e-b2131ae84e50] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1213 13:47:25.358303  743793 system_pods.go:61] "kube-apiserver-newest-cni-362964" [31c7799d-0188-4e2f-8d32-eb6e3ffe29ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 13:47:25.358315  743793 system_pods.go:61] "kube-controller-manager-newest-cni-362964" [cee82184-0e71-4dfb-8851-d642f2716578] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 13:47:25.358325  743793 system_pods.go:61] "kube-proxy-97cpx" [c081628a-7cdd-4b8c-9d28-9d95707c6064] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 13:47:25.358338  743793 system_pods.go:61] "kube-scheduler-newest-cni-362964" [d160f41f-e904-4d11-9b2c-157bfcbc668f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 13:47:25.358346  743793 system_pods.go:61] "storage-provisioner" [b6d4689e-b3f1-496d-bfd4-11cb93ea7c15] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 13:47:25.358352  743793 system_pods.go:74] duration metric: took 3.617333ms to wait for pod list to return data ...
	I1213 13:47:25.358359  743793 default_sa.go:34] waiting for default service account to be created ...
	I1213 13:47:25.360577  743793 default_sa.go:45] found service account: "default"
	I1213 13:47:25.360597  743793 default_sa.go:55] duration metric: took 2.231432ms for default service account to be created ...
	I1213 13:47:25.360614  743793 kubeadm.go:587] duration metric: took 3.196023464s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 13:47:25.360633  743793 node_conditions.go:102] verifying NodePressure condition ...
	I1213 13:47:25.362709  743793 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 13:47:25.362732  743793 node_conditions.go:123] node cpu capacity is 8
	I1213 13:47:25.362750  743793 node_conditions.go:105] duration metric: took 2.111782ms to run NodePressure ...
	I1213 13:47:25.362764  743793 start.go:242] waiting for startup goroutines ...
	I1213 13:47:25.362789  743793 start.go:247] waiting for cluster config update ...
	I1213 13:47:25.362806  743793 start.go:256] writing updated cluster config ...
	I1213 13:47:25.363125  743793 ssh_runner.go:195] Run: rm -f paused
	I1213 13:47:25.410985  743793 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1213 13:47:25.412352  743793 out.go:179] * Done! kubectl is now configured to use "newest-cni-362964" cluster and "default" namespace by default
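	At this point the profile is up: kubectl 1.34.3 against a v1.35.0-beta.0 control plane is the one-minor skew noted above, which kubectl supports. A quick smoke test against the new profile, assuming the kubeconfig context carries the profile name as minikube configures it:

	    minikube -p newest-cni-362964 status
	    kubectl --context newest-cni-362964 get pods -A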
	
	
	==> CRI-O <==
	Dec 13 13:46:50 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:46:50.863164282Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 13:46:50 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:46:50.867100804Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 13:46:50 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:46:50.867125176Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 13:47:06 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:06.98024766Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4610e88f-91b8-4dc9-bfd5-d71b1801702c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:47:06 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:06.981346573Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9537215a-a158-4e9a-aa82-87beb80a4c57 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:47:06 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:06.982599554Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8v9dp/dashboard-metrics-scraper" id=43f94380-e3da-429d-b068-75c319a9ee33 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:47:06 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:06.982749307Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:06 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:06.989174462Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:06 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:06.989669059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:07 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:07.020888409Z" level=info msg="Created container 6ded9f03b4daa14802c47d962fca913f09fb2e6a6f9427e0a4c0c99b83f2a573: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8v9dp/dashboard-metrics-scraper" id=43f94380-e3da-429d-b068-75c319a9ee33 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:47:07 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:07.021503833Z" level=info msg="Starting container: 6ded9f03b4daa14802c47d962fca913f09fb2e6a6f9427e0a4c0c99b83f2a573" id=538585fe-32bc-49ec-8b44-764a4153501a name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:47:07 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:07.023727608Z" level=info msg="Started container" PID=1784 containerID=6ded9f03b4daa14802c47d962fca913f09fb2e6a6f9427e0a4c0c99b83f2a573 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8v9dp/dashboard-metrics-scraper id=538585fe-32bc-49ec-8b44-764a4153501a name=/runtime.v1.RuntimeService/StartContainer sandboxID=349937d79c24c3bb7d19489d270c33ef31d471dc751ac83533a226980e443e27
	Dec 13 13:47:07 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:07.11032268Z" level=info msg="Removing container: a1ff92e19c1c102ff902b879ae9b96bd8e1b42102b397dd0d3e000d063ec4870" id=9451087c-3c18-4360-865a-4bcefb1946a2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 13:47:07 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:07.121023525Z" level=info msg="Removed container a1ff92e19c1c102ff902b879ae9b96bd8e1b42102b397dd0d3e000d063ec4870: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8v9dp/dashboard-metrics-scraper" id=9451087c-3c18-4360-865a-4bcefb1946a2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 13:47:11 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:11.121453372Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=412dee1d-b6c8-46cc-8744-ea392cbfd157 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:47:11 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:11.122416899Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=34befef1-20f2-4e57-8344-aa13e0e56471 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:47:11 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:11.12353311Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b006d57f-6378-4c61-a9d0-bb80a7e342d4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:47:11 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:11.123663491Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:11 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:11.129079852Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:11 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:11.129264119Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/88b09ad1ac82ed5b9bf69f5ed50d34e0d61addc0711327d360ecfc36d1659494/merged/etc/passwd: no such file or directory"
	Dec 13 13:47:11 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:11.129296629Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/88b09ad1ac82ed5b9bf69f5ed50d34e0d61addc0711327d360ecfc36d1659494/merged/etc/group: no such file or directory"
	Dec 13 13:47:11 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:11.129541779Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:11 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:11.157466171Z" level=info msg="Created container 0c83493337f4062fa6f81e8d01db7498fef9b6315d8fe35541f7e23b56f0a375: kube-system/storage-provisioner/storage-provisioner" id=b006d57f-6378-4c61-a9d0-bb80a7e342d4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:47:11 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:11.15811689Z" level=info msg="Starting container: 0c83493337f4062fa6f81e8d01db7498fef9b6315d8fe35541f7e23b56f0a375" id=f700a082-6b66-4b1b-9ac1-808571e1bf58 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:47:11 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:11.159919834Z" level=info msg="Started container" PID=1798 containerID=0c83493337f4062fa6f81e8d01db7498fef9b6315d8fe35541f7e23b56f0a375 description=kube-system/storage-provisioner/storage-provisioner id=f700a082-6b66-4b1b-9ac1-808571e1bf58 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8d6b1eb91c70a83a8f6203ab9ddae87ab7fe7f0db76615db07127be13c4488a1
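	The CRI-O excerpt above is from the default-k8s-diff-port-038239 node. The "Failed to open /etc/passwd" and "/etc/group" warnings during creation of the storage-provisioner container are harmless here, likely because the image ships neither file; the Created/Started lines that follow show the container came up anyway. The same journal can be read directly, assuming the profile name above:

	    minikube -p default-k8s-diff-port-038239 ssh "sudo journalctl -u crio --no-pager -n 50"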
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	0c83493337f40       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   8d6b1eb91c70a       storage-provisioner                                    kube-system
	6ded9f03b4daa       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   349937d79c24c       dashboard-metrics-scraper-6ffb444bf9-8v9dp             kubernetes-dashboard
	3204f1765ba08       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   cb79ef9143023       kubernetes-dashboard-855c9754f9-zlkps                  kubernetes-dashboard
	b7f6d2e12ce0e       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   50a1820e62e13       busybox                                                default
	8405085ebe705       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   5fbd5ced2aea6       coredns-66bc5c9577-tzzmx                               kube-system
	5795013750c24       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           49 seconds ago      Running             kube-proxy                  0                   eb58de6f92ffc       kube-proxy-lzwfg                                       kube-system
	64ba07c204a50       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   8d6b1eb91c70a       storage-provisioner                                    kube-system
	ece6cabbfd4de       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   0db3f4ba8580b       kindnet-c65rs                                          kube-system
	e666bfd89f30f       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           52 seconds ago      Running             kube-apiserver              0                   9ba8b9c325cd2       kube-apiserver-default-k8s-diff-port-038239            kube-system
	99cbb0e73d219       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           52 seconds ago      Running             kube-scheduler              0                   e27423a7cb39a       kube-scheduler-default-k8s-diff-port-038239            kube-system
	6d6fd6c98d01d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           52 seconds ago      Running             etcd                        0                   d79667ee7685c       etcd-default-k8s-diff-port-038239                      kube-system
	334a9f2c1095a       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           52 seconds ago      Running             kube-controller-manager     0                   4f65e0aceb6fa       kube-controller-manager-default-k8s-diff-port-038239   kube-system
	
	
	==> coredns [8405085ebe705ba7b423d0c2b5d4883fb997fcd33bd7a956ea769773c6341e34] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60624 - 60979 "HINFO IN 7938494452543560986.1464981115743233958. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.487865899s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
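	The "dial tcp 10.96.0.1:443: i/o timeout" errors above mean CoreDNS could not yet reach the apiserver through the kubernetes service ClusterIP after the restart; the kubernetes plugin keeps retrying and serves with an unsynced API in the meantime (the WARNING line earlier in this block). A way to re-check the current state, assuming the standard k8s-app=kube-dns label minikube puts on CoreDNS:

	    kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20
	    kubectl get svc kubernetes    # the 10.96.0.1 ClusterIP CoreDNS is dialing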
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-038239
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-038239
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=default-k8s-diff-port-038239
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_45_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:45:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-038239
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:47:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:47:10 +0000   Sat, 13 Dec 2025 13:45:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:47:10 +0000   Sat, 13 Dec 2025 13:45:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:47:10 +0000   Sat, 13 Dec 2025 13:45:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 13:47:10 +0000   Sat, 13 Dec 2025 13:45:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-038239
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                411d424d-b720-4c82-b27f-51e7954655e7
	  Boot ID:                    3a031c38-2de5-4abf-9191-ca3cf8c37af1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-66bc5c9577-tzzmx                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     101s
	  kube-system                 etcd-default-k8s-diff-port-038239                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         107s
	  kube-system                 kindnet-c65rs                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      101s
	  kube-system                 kube-apiserver-default-k8s-diff-port-038239             250m (3%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-038239    200m (2%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-lzwfg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-default-k8s-diff-port-038239             100m (1%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-8v9dp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zlkps                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 100s               kube-proxy       
	  Normal  Starting                 49s                kube-proxy       
	  Normal  NodeHasSufficientMemory  107s               kubelet          Node default-k8s-diff-port-038239 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s               kubelet          Node default-k8s-diff-port-038239 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s               kubelet          Node default-k8s-diff-port-038239 status is now: NodeHasSufficientPID
	  Normal  Starting                 107s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           102s               node-controller  Node default-k8s-diff-port-038239 event: Registered Node default-k8s-diff-port-038239 in Controller
	  Normal  NodeReady                90s                kubelet          Node default-k8s-diff-port-038239 status is now: NodeReady
	  Normal  Starting                 53s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 53s)  kubelet          Node default-k8s-diff-port-038239 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 53s)  kubelet          Node default-k8s-diff-port-038239 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 53s)  kubelet          Node default-k8s-diff-port-038239 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                node-controller  Node default-k8s-diff-port-038239 event: Registered Node default-k8s-diff-port-038239 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 d4 5a 35 c7 c3 08 06
	[  +0.021086] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +19.681588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 0c 97 18 9b e3 08 06
	[  +0.000314] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 04 61 d2 c8 ed 08 06
	[Dec13 13:44] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[  +7.252347] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 ce fd 58 59 0f 08 06
	[  +0.000117] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	[  +1.567410] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 59 b8 80 29 4a 08 06
	[  +0.000370] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +13.814205] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 cb 6b 87 5d af 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[Dec13 13:45] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8e 49 cc d7 b3 9c 08 06
	[  +0.000851] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	
	
	==> etcd [6d6fd6c98d01d12e4674d5a2044ea8579a053244365f2f43c908c34dac570480] <==
	{"level":"warn","ts":"2025-12-13T13:46:38.888377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.893659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.902150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.912185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.921296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.931187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.939887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.947739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.955935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.965789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.974892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.988706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.998178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:39.010248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:39.070542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56256","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T13:46:43.311187Z","caller":"traceutil/trace.go:172","msg":"trace[71490223] transaction","detail":"{read_only:false; response_revision:476; number_of_response:1; }","duration":"148.139496ms","start":"2025-12-13T13:46:43.163019Z","end":"2025-12-13T13:46:43.311158Z","steps":["trace[71490223] 'process raft request'  (duration: 145.504639ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:46:43.311339Z","caller":"traceutil/trace.go:172","msg":"trace[1273882751] transaction","detail":"{read_only:false; response_revision:477; number_of_response:1; }","duration":"148.227558ms","start":"2025-12-13T13:46:43.163090Z","end":"2025-12-13T13:46:43.311318Z","steps":["trace[1273882751] 'process raft request'  (duration: 148.010391ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:46:43.604576Z","caller":"traceutil/trace.go:172","msg":"trace[1210521705] transaction","detail":"{read_only:false; response_revision:482; number_of_response:1; }","duration":"141.354245ms","start":"2025-12-13T13:46:43.463200Z","end":"2025-12-13T13:46:43.604554Z","steps":["trace[1210521705] 'process raft request'  (duration: 137.365038ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:46:43.730152Z","caller":"traceutil/trace.go:172","msg":"trace[1699595448] linearizableReadLoop","detail":"{readStateIndex:513; appliedIndex:513; }","duration":"118.933405ms","start":"2025-12-13T13:46:43.611192Z","end":"2025-12-13T13:46:43.730125Z","steps":["trace[1699595448] 'read index received'  (duration: 118.924588ms)","trace[1699595448] 'applied index is now lower than readState.Index'  (duration: 7.71µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T13:46:43.759024Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"147.805775ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking\" limit:1 ","response":"range_response_count:1 size:370"}
	{"level":"info","ts":"2025-12-13T13:46:43.759119Z","caller":"traceutil/trace.go:172","msg":"trace[1970472961] range","detail":"{range_begin:/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking; range_end:; response_count:1; response_revision:482; }","duration":"147.922959ms","start":"2025-12-13T13:46:43.611181Z","end":"2025-12-13T13:46:43.759104Z","steps":["trace[1970472961] 'agreement among raft nodes before linearized reading'  (duration: 119.041307ms)","trace[1970472961] 'range keys from in-memory index tree'  (duration: 28.664266ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T13:46:43.759106Z","caller":"traceutil/trace.go:172","msg":"trace[387163409] transaction","detail":"{read_only:false; response_revision:483; number_of_response:1; }","duration":"149.556472ms","start":"2025-12-13T13:46:43.609519Z","end":"2025-12-13T13:46:43.759076Z","steps":["trace[387163409] 'process raft request'  (duration: 120.698651ms)","trace[387163409] 'compare'  (duration: 28.725338ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T13:46:43.778897Z","caller":"traceutil/trace.go:172","msg":"trace[1783498315] transaction","detail":"{read_only:false; response_revision:484; number_of_response:1; }","duration":"168.523071ms","start":"2025-12-13T13:46:43.610357Z","end":"2025-12-13T13:46:43.778880Z","steps":["trace[1783498315] 'process raft request'  (duration: 168.305286ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:46:43.778916Z","caller":"traceutil/trace.go:172","msg":"trace[52909227] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"168.397125ms","start":"2025-12-13T13:46:43.610496Z","end":"2025-12-13T13:46:43.778893Z","steps":["trace[52909227] 'process raft request'  (duration: 168.261769ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:46:43.778901Z","caller":"traceutil/trace.go:172","msg":"trace[1476993380] transaction","detail":"{read_only:false; response_revision:486; number_of_response:1; }","duration":"166.535644ms","start":"2025-12-13T13:46:43.612346Z","end":"2025-12-13T13:46:43.778882Z","steps":["trace[1476993380] 'process raft request'  (duration: 166.482687ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:47:29 up  2:29,  0 user,  load average: 4.14, 4.11, 2.73
	Linux default-k8s-diff-port-038239 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ece6cabbfd4de4c0bf871f5e57ce5c4769621be9c162b8537aebfca43ac97e90] <==
	I1213 13:46:40.590597       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 13:46:40.590982       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1213 13:46:40.591241       1 main.go:148] setting mtu 1500 for CNI 
	I1213 13:46:40.591266       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 13:46:40.591279       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T13:46:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 13:46:40.807925       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 13:46:40.889848       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 13:46:40.890985       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 13:46:40.891141       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 13:46:41.291067       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 13:46:41.291093       1 metrics.go:72] Registering metrics
	I1213 13:46:41.291144       1 controller.go:711] "Syncing nftables rules"
	I1213 13:46:50.807737       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 13:46:50.807823       1 main.go:301] handling current node
	I1213 13:47:00.808428       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 13:47:00.808477       1 main.go:301] handling current node
	I1213 13:47:10.808641       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 13:47:10.808675       1 main.go:301] handling current node
	I1213 13:47:20.811883       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 13:47:20.811913       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e666bfd89f30f85cd8c1e8c64c04b77df4cb27f6c7df7838bdfaf6bf54d5ab00] <==
	I1213 13:46:39.682271       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1213 13:46:39.682299       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 13:46:39.683770       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 13:46:39.683801       1 aggregator.go:171] initial CRD sync complete...
	I1213 13:46:39.683810       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 13:46:39.683824       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 13:46:39.683830       1 cache.go:39] Caches are synced for autoregister controller
	I1213 13:46:39.682286       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1213 13:46:39.688395       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 13:46:39.691833       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:46:39.693912       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 13:46:39.693973       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 13:46:39.717604       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 13:46:40.004174       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 13:46:40.005335       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 13:46:40.046321       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 13:46:40.071680       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 13:46:40.080126       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 13:46:40.122296       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.236.104"}
	I1213 13:46:40.132695       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.88.246"}
	I1213 13:46:40.575666       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 13:46:43.162468       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:46:43.162517       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:46:43.362181       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 13:46:43.462699       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [334a9f2c1095a76f324f17afb6dae5685e1e8043861620865467bb49011fd8ea] <==
	I1213 13:46:42.884565       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 13:46:42.885830       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1213 13:46:42.888064       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 13:46:42.890305       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 13:46:42.908161       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 13:46:42.909413       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 13:46:42.909434       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 13:46:42.909472       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 13:46:42.909804       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1213 13:46:42.911054       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1213 13:46:42.912250       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 13:46:42.914392       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 13:46:42.917670       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1213 13:46:42.917819       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1213 13:46:42.917859       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1213 13:46:42.917869       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1213 13:46:42.917875       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1213 13:46:42.918004       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 13:46:42.922940       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 13:46:42.924718       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1213 13:46:43.069456       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1213 13:46:43.108240       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 13:46:43.108258       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 13:46:43.108263       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 13:46:43.169735       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [5795013750c24c2283403e4012508ffdb318fa62ff8d0f01376a3b277bfc99f8] <==
	I1213 13:46:40.377002       1 server_linux.go:53] "Using iptables proxy"
	I1213 13:46:40.479637       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 13:46:40.580029       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 13:46:40.580079       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1213 13:46:40.580185       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:46:40.605924       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:46:40.605996       1 server_linux.go:132] "Using iptables Proxier"
	I1213 13:46:40.612404       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:46:40.613079       1 server.go:527] "Version info" version="v1.34.2"
	I1213 13:46:40.613129       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:46:40.615001       1 config.go:200] "Starting service config controller"
	I1213 13:46:40.615019       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:46:40.615031       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:46:40.615031       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:46:40.615114       1 config.go:309] "Starting node config controller"
	I1213 13:46:40.615122       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:46:40.615131       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:46:40.615005       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:46:40.615750       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:46:40.715253       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 13:46:40.715265       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 13:46:40.715984       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [99cbb0e73d2197ad662dba2a00e0ec2f3ce53cd9276e552c1ca3a62cac601105] <==
	I1213 13:46:39.032275       1 serving.go:386] Generated self-signed cert in-memory
	I1213 13:46:39.661885       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 13:46:39.662011       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:46:39.668575       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1213 13:46:39.668616       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 13:46:39.668650       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 13:46:39.668625       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 13:46:39.668719       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 13:46:39.669044       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 13:46:39.668648       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1213 13:46:39.669306       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 13:46:39.769085       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 13:46:39.769159       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1213 13:46:39.769086       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Dec 13 13:46:42 default-k8s-diff-port-038239 kubelet[734]: I1213 13:46:42.817020     734 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 13 13:46:43 default-k8s-diff-port-038239 kubelet[734]: I1213 13:46:43.928974     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1875c354-6bf3-4786-b35a-cac99170722a-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-zlkps\" (UID: \"1875c354-6bf3-4786-b35a-cac99170722a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zlkps"
	Dec 13 13:46:43 default-k8s-diff-port-038239 kubelet[734]: I1213 13:46:43.929054     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqsmp\" (UniqueName: \"kubernetes.io/projected/988b8622-06ce-4f75-97da-b7867be34de6-kube-api-access-xqsmp\") pod \"dashboard-metrics-scraper-6ffb444bf9-8v9dp\" (UID: \"988b8622-06ce-4f75-97da-b7867be34de6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8v9dp"
	Dec 13 13:46:43 default-k8s-diff-port-038239 kubelet[734]: I1213 13:46:43.929189     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8tvs\" (UniqueName: \"kubernetes.io/projected/1875c354-6bf3-4786-b35a-cac99170722a-kube-api-access-x8tvs\") pod \"kubernetes-dashboard-855c9754f9-zlkps\" (UID: \"1875c354-6bf3-4786-b35a-cac99170722a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zlkps"
	Dec 13 13:46:43 default-k8s-diff-port-038239 kubelet[734]: I1213 13:46:43.929258     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/988b8622-06ce-4f75-97da-b7867be34de6-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-8v9dp\" (UID: \"988b8622-06ce-4f75-97da-b7867be34de6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8v9dp"
	Dec 13 13:46:47 default-k8s-diff-port-038239 kubelet[734]: I1213 13:46:47.048251     734 scope.go:117] "RemoveContainer" containerID="6995a475a06e84b21528339a5da0d0945da4371c01d87060f6ae9d2a160e16ce"
	Dec 13 13:46:48 default-k8s-diff-port-038239 kubelet[734]: I1213 13:46:48.053638     734 scope.go:117] "RemoveContainer" containerID="6995a475a06e84b21528339a5da0d0945da4371c01d87060f6ae9d2a160e16ce"
	Dec 13 13:46:48 default-k8s-diff-port-038239 kubelet[734]: I1213 13:46:48.054022     734 scope.go:117] "RemoveContainer" containerID="a1ff92e19c1c102ff902b879ae9b96bd8e1b42102b397dd0d3e000d063ec4870"
	Dec 13 13:46:48 default-k8s-diff-port-038239 kubelet[734]: E1213 13:46:48.054210     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8v9dp_kubernetes-dashboard(988b8622-06ce-4f75-97da-b7867be34de6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8v9dp" podUID="988b8622-06ce-4f75-97da-b7867be34de6"
	Dec 13 13:46:49 default-k8s-diff-port-038239 kubelet[734]: I1213 13:46:49.058689     734 scope.go:117] "RemoveContainer" containerID="a1ff92e19c1c102ff902b879ae9b96bd8e1b42102b397dd0d3e000d063ec4870"
	Dec 13 13:46:49 default-k8s-diff-port-038239 kubelet[734]: E1213 13:46:49.058903     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8v9dp_kubernetes-dashboard(988b8622-06ce-4f75-97da-b7867be34de6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8v9dp" podUID="988b8622-06ce-4f75-97da-b7867be34de6"
	Dec 13 13:46:50 default-k8s-diff-port-038239 kubelet[734]: I1213 13:46:50.073237     734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zlkps" podStartSLOduration=1.531378751 podStartE2EDuration="7.073211577s" podCreationTimestamp="2025-12-13 13:46:43 +0000 UTC" firstStartedPulling="2025-12-13 13:46:44.149770519 +0000 UTC m=+7.265661114" lastFinishedPulling="2025-12-13 13:46:49.691603333 +0000 UTC m=+12.807493940" observedRunningTime="2025-12-13 13:46:50.072878747 +0000 UTC m=+13.188769363" watchObservedRunningTime="2025-12-13 13:46:50.073211577 +0000 UTC m=+13.189102194"
	Dec 13 13:46:55 default-k8s-diff-port-038239 kubelet[734]: I1213 13:46:55.413466     734 scope.go:117] "RemoveContainer" containerID="a1ff92e19c1c102ff902b879ae9b96bd8e1b42102b397dd0d3e000d063ec4870"
	Dec 13 13:46:55 default-k8s-diff-port-038239 kubelet[734]: E1213 13:46:55.413723     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8v9dp_kubernetes-dashboard(988b8622-06ce-4f75-97da-b7867be34de6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8v9dp" podUID="988b8622-06ce-4f75-97da-b7867be34de6"
	Dec 13 13:47:06 default-k8s-diff-port-038239 kubelet[734]: I1213 13:47:06.979761     734 scope.go:117] "RemoveContainer" containerID="a1ff92e19c1c102ff902b879ae9b96bd8e1b42102b397dd0d3e000d063ec4870"
	Dec 13 13:47:07 default-k8s-diff-port-038239 kubelet[734]: I1213 13:47:07.108542     734 scope.go:117] "RemoveContainer" containerID="a1ff92e19c1c102ff902b879ae9b96bd8e1b42102b397dd0d3e000d063ec4870"
	Dec 13 13:47:07 default-k8s-diff-port-038239 kubelet[734]: I1213 13:47:07.108891     734 scope.go:117] "RemoveContainer" containerID="6ded9f03b4daa14802c47d962fca913f09fb2e6a6f9427e0a4c0c99b83f2a573"
	Dec 13 13:47:07 default-k8s-diff-port-038239 kubelet[734]: E1213 13:47:07.109139     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8v9dp_kubernetes-dashboard(988b8622-06ce-4f75-97da-b7867be34de6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8v9dp" podUID="988b8622-06ce-4f75-97da-b7867be34de6"
	Dec 13 13:47:11 default-k8s-diff-port-038239 kubelet[734]: I1213 13:47:11.121055     734 scope.go:117] "RemoveContainer" containerID="64ba07c204a50056a0dfebc7954692a6ef002bb0ddac55dae71b35ceda35cfd1"
	Dec 13 13:47:15 default-k8s-diff-port-038239 kubelet[734]: I1213 13:47:15.413418     734 scope.go:117] "RemoveContainer" containerID="6ded9f03b4daa14802c47d962fca913f09fb2e6a6f9427e0a4c0c99b83f2a573"
	Dec 13 13:47:15 default-k8s-diff-port-038239 kubelet[734]: E1213 13:47:15.413598     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8v9dp_kubernetes-dashboard(988b8622-06ce-4f75-97da-b7867be34de6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8v9dp" podUID="988b8622-06ce-4f75-97da-b7867be34de6"
	Dec 13 13:47:26 default-k8s-diff-port-038239 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 13:47:26 default-k8s-diff-port-038239 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 13:47:26 default-k8s-diff-port-038239 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 13:47:26 default-k8s-diff-port-038239 systemd[1]: kubelet.service: Consumed 1.643s CPU time.
	
	
	==> kubernetes-dashboard [3204f1765ba08dd53b15129320ee6b079bc92ca458fd51509e14ccc8640a8ccc] <==
	2025/12/13 13:46:49 Starting overwatch
	2025/12/13 13:46:49 Using namespace: kubernetes-dashboard
	2025/12/13 13:46:49 Using in-cluster config to connect to apiserver
	2025/12/13 13:46:49 Using secret token for csrf signing
	2025/12/13 13:46:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 13:46:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 13:46:49 Successful initial request to the apiserver, version: v1.34.2
	2025/12/13 13:46:49 Generating JWE encryption key
	2025/12/13 13:46:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 13:46:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 13:46:49 Initializing JWE encryption key from synchronized object
	2025/12/13 13:46:49 Creating in-cluster Sidecar client
	2025/12/13 13:46:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 13:46:49 Serving insecurely on HTTP port: 9090
	2025/12/13 13:47:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [0c83493337f4062fa6f81e8d01db7498fef9b6315d8fe35541f7e23b56f0a375] <==
	I1213 13:47:11.171813       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 13:47:11.179223       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 13:47:11.179267       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 13:47:11.181264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:47:14.636845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:47:18.897077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:47:22.495877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:47:25.550178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:47:28.573344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:47:28.578385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 13:47:28.578629       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 13:47:28.578818       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"164d55c5-fd3d-4e0d-b772-31680a1bef78", APIVersion:"v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-038239_c7db4044-176f-475f-89f1-6cbf9a73e0e5 became leader
	I1213 13:47:28.578869       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-038239_c7db4044-176f-475f-89f1-6cbf9a73e0e5!
	W1213 13:47:28.581080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:47:28.584973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 13:47:28.679105       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-038239_c7db4044-176f-475f-89f1-6cbf9a73e0e5!
	
	
	==> storage-provisioner [64ba07c204a50056a0dfebc7954692a6ef002bb0ddac55dae71b35ceda35cfd1] <==
	I1213 13:46:40.342758       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 13:47:10.347408       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-038239 -n default-k8s-diff-port-038239
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-038239 -n default-k8s-diff-port-038239: exit status 2 (352.563496ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-038239 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-038239
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-038239:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "284f8c641cab5a0fbe10d63636e1daa6c38652a3d8e4ed0d0d00ddebf73de3da",
	        "Created": "2025-12-13T13:45:28.121473239Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 731169,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T13:46:30.662135516Z",
	            "FinishedAt": "2025-12-13T13:46:29.325549748Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/284f8c641cab5a0fbe10d63636e1daa6c38652a3d8e4ed0d0d00ddebf73de3da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/284f8c641cab5a0fbe10d63636e1daa6c38652a3d8e4ed0d0d00ddebf73de3da/hostname",
	        "HostsPath": "/var/lib/docker/containers/284f8c641cab5a0fbe10d63636e1daa6c38652a3d8e4ed0d0d00ddebf73de3da/hosts",
	        "LogPath": "/var/lib/docker/containers/284f8c641cab5a0fbe10d63636e1daa6c38652a3d8e4ed0d0d00ddebf73de3da/284f8c641cab5a0fbe10d63636e1daa6c38652a3d8e4ed0d0d00ddebf73de3da-json.log",
	        "Name": "/default-k8s-diff-port-038239",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-038239:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-038239",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "284f8c641cab5a0fbe10d63636e1daa6c38652a3d8e4ed0d0d00ddebf73de3da",
	                "LowerDir": "/var/lib/docker/overlay2/60f326094624426fad6e6847f8117422b0fa3770373cf2b7510f46843322aed1-init/diff:/var/lib/docker/overlay2/2ab30f867418f233812f5ff754587aaeab7569a5579dc6a5c99873a35cf81eb6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/60f326094624426fad6e6847f8117422b0fa3770373cf2b7510f46843322aed1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/60f326094624426fad6e6847f8117422b0fa3770373cf2b7510f46843322aed1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/60f326094624426fad6e6847f8117422b0fa3770373cf2b7510f46843322aed1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-038239",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-038239/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-038239",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-038239",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-038239",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "2775ce49e3d7742efea109783b5f01f3210028bb6bb61f89e58842c5fc1256aa",
	            "SandboxKey": "/var/run/docker/netns/2775ce49e3d7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33513"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-038239": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "251672d224afb460f1a8362b4545aae5d977bdecd5cdddf5909169b2b5623ddc",
	                    "EndpointID": "74adb9b05fce784c0beecfa2e277b193d663a7d9fd1727ac491d4086d579f64b",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "6a:3e:27:77:8b:ad",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-038239",
	                        "284f8c641cab"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-038239 -n default-k8s-diff-port-038239
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-038239 -n default-k8s-diff-port-038239: exit status 2 (336.876974ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-038239 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-038239 logs -n 25: (1.139644439s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-038239 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-038239 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ image   │ old-k8s-version-417583 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p old-k8s-version-417583 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-038239 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p default-k8s-diff-port-038239 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:47 UTC │
	│ delete  │ -p old-k8s-version-417583                                                                                                                                                                                                                            │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ delete  │ -p old-k8s-version-417583                                                                                                                                                                                                                            │ old-k8s-version-417583       │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ start   │ -p newest-cni-362964 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:47 UTC │
	│ image   │ no-preload-992258 image list --format=json                                                                                                                                                                                                           │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p no-preload-992258 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ delete  │ -p no-preload-992258                                                                                                                                                                                                                                 │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ image   │ embed-certs-973953 image list --format=json                                                                                                                                                                                                          │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ pause   │ -p embed-certs-973953 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │                     │
	│ delete  │ -p no-preload-992258                                                                                                                                                                                                                                 │ no-preload-992258            │ jenkins │ v1.37.0 │ 13 Dec 25 13:46 UTC │ 13 Dec 25 13:46 UTC │
	│ delete  │ -p embed-certs-973953                                                                                                                                                                                                                                │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ addons  │ enable metrics-server -p newest-cni-362964 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │                     │
	│ delete  │ -p embed-certs-973953                                                                                                                                                                                                                                │ embed-certs-973953           │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ stop    │ -p newest-cni-362964 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ addons  │ enable dashboard -p newest-cni-362964 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ start   │ -p newest-cni-362964 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ image   │ newest-cni-362964 image list --format=json                                                                                                                                                                                                           │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ pause   │ -p newest-cni-362964 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-362964            │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │                     │
	│ image   │ default-k8s-diff-port-038239 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │ 13 Dec 25 13:47 UTC │
	│ pause   │ -p default-k8s-diff-port-038239 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-038239 │ jenkins │ v1.37.0 │ 13 Dec 25 13:47 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:47:15
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:47:15.396688  743793 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:47:15.396994  743793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:47:15.397008  743793 out.go:374] Setting ErrFile to fd 2...
	I1213 13:47:15.397013  743793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:47:15.397213  743793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:47:15.397667  743793 out.go:368] Setting JSON to false
	I1213 13:47:15.398880  743793 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8983,"bootTime":1765624652,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:47:15.398951  743793 start.go:143] virtualization: kvm guest
	I1213 13:47:15.401034  743793 out.go:179] * [newest-cni-362964] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:47:15.402467  743793 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:47:15.402473  743793 notify.go:221] Checking for updates...
	I1213 13:47:15.404652  743793 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:47:15.406062  743793 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:47:15.407288  743793 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:47:15.408413  743793 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:47:15.409475  743793 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:47:15.411121  743793 config.go:182] Loaded profile config "newest-cni-362964": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:47:15.411682  743793 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:47:15.438362  743793 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:47:15.438448  743793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:47:15.493074  743793 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-13 13:47:15.482438278 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:47:15.493183  743793 docker.go:319] overlay module found
	I1213 13:47:15.494725  743793 out.go:179] * Using the docker driver based on existing profile
	I1213 13:47:15.495689  743793 start.go:309] selected driver: docker
	I1213 13:47:15.495700  743793 start.go:927] validating driver "docker" against &{Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:47:15.495792  743793 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:47:15.496338  743793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:47:15.549301  743793 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-13 13:47:15.539654496 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:47:15.549628  743793 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 13:47:15.549663  743793 cni.go:84] Creating CNI manager for ""
	I1213 13:47:15.549714  743793 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:47:15.549744  743793 start.go:353] cluster config:
	{Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:47:15.551716  743793 out.go:179] * Starting "newest-cni-362964" primary control-plane node in "newest-cni-362964" cluster
	I1213 13:47:15.552847  743793 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 13:47:15.553932  743793 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 13:47:15.555067  743793 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:47:15.555097  743793 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1213 13:47:15.555109  743793 cache.go:65] Caching tarball of preloaded images
	I1213 13:47:15.555160  743793 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 13:47:15.555218  743793 preload.go:238] Found /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 13:47:15.555231  743793 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 13:47:15.555336  743793 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/config.json ...
	I1213 13:47:15.575398  743793 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 13:47:15.575441  743793 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 13:47:15.575458  743793 cache.go:243] Successfully downloaded all kic artifacts
	I1213 13:47:15.575494  743793 start.go:360] acquireMachinesLock for newest-cni-362964: {Name:mk61572d281c54a6e0670409b0733cc12a8d00e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 13:47:15.575570  743793 start.go:364] duration metric: took 43.268µs to acquireMachinesLock for "newest-cni-362964"
	I1213 13:47:15.575593  743793 start.go:96] Skipping create...Using existing machine configuration
	I1213 13:47:15.575602  743793 fix.go:54] fixHost starting: 
	I1213 13:47:15.575821  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:15.591746  743793 fix.go:112] recreateIfNeeded on newest-cni-362964: state=Stopped err=<nil>
	W1213 13:47:15.591771  743793 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 13:47:15.593293  743793 out.go:252] * Restarting existing docker container for "newest-cni-362964" ...
	I1213 13:47:15.593360  743793 cli_runner.go:164] Run: docker start newest-cni-362964
	I1213 13:47:15.823041  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:15.840403  743793 kic.go:430] container "newest-cni-362964" state is running.
	I1213 13:47:15.840799  743793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:47:15.859084  743793 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/config.json ...
	I1213 13:47:15.859284  743793 machine.go:94] provisionDockerMachine start ...
	I1213 13:47:15.859348  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:15.877682  743793 main.go:143] libmachine: Using SSH client type: native
	I1213 13:47:15.878002  743793 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1213 13:47:15.878021  743793 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 13:47:15.878655  743793 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47684->127.0.0.1:33520: read: connection reset by peer
	I1213 13:47:19.010243  743793 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-362964
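	The provisioner reaches the freshly restarted container by asking Docker which host port was published for the guest's 22/tcp, then drives the rest of provisioning over SSH as the "docker" user. A minimal standalone sketch of that lookup, assuming the profile name from this run and a default ~/.minikube home (the log's actual key lives under the Jenkins integration directory):
	# Ask Docker which host port is mapped to the container's 22/tcp
	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-362964)
	# Connect the way the provisioner does: user "docker", the profile's generated key
	ssh -o StrictHostKeyChecking=no \
	    -i ~/.minikube/machines/newest-cni-362964/id_rsa \
	    -p "$PORT" docker@127.0.0.1 hostname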
	
	I1213 13:47:19.010275  743793 ubuntu.go:182] provisioning hostname "newest-cni-362964"
	I1213 13:47:19.010342  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:19.028649  743793 main.go:143] libmachine: Using SSH client type: native
	I1213 13:47:19.028926  743793 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1213 13:47:19.028944  743793 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-362964 && echo "newest-cni-362964" | sudo tee /etc/hostname
	I1213 13:47:19.167849  743793 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-362964
	
	I1213 13:47:19.167956  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:19.185955  743793 main.go:143] libmachine: Using SSH client type: native
	I1213 13:47:19.186189  743793 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1213 13:47:19.186207  743793 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-362964' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-362964/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-362964' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 13:47:19.317047  743793 main.go:143] libmachine: SSH cmd err, output: <nil>: 
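	The hostname fix-up above is deliberately idempotent: nothing is touched if /etc/hosts already maps the name, and an existing 127.0.1.1 line is rewritten in place rather than duplicated. The same logic as a standalone script, with the hostname hard-coded for illustration:
	NAME=newest-cni-362964
	if ! grep -xq ".*\s${NAME}" /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    # a 127.0.1.1 mapping already exists: rewrite it in place
	    sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${NAME}/g" /etc/hosts
	  else
	    # no 127.0.1.1 line yet: append one
	    echo "127.0.1.1 ${NAME}" | sudo tee -a /etc/hosts
	  fi
	fi
	# Quick check that the name now resolves locally
	getent hosts "${NAME}"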
	I1213 13:47:19.317087  743793 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-390571/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-390571/.minikube}
	I1213 13:47:19.317117  743793 ubuntu.go:190] setting up certificates
	I1213 13:47:19.317129  743793 provision.go:84] configureAuth start
	I1213 13:47:19.317214  743793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:47:19.335703  743793 provision.go:143] copyHostCerts
	I1213 13:47:19.335786  743793 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem, removing ...
	I1213 13:47:19.335809  743793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem
	I1213 13:47:19.335895  743793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/ca.pem (1078 bytes)
	I1213 13:47:19.336007  743793 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem, removing ...
	I1213 13:47:19.336019  743793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem
	I1213 13:47:19.336046  743793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/cert.pem (1123 bytes)
	I1213 13:47:19.336102  743793 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem, removing ...
	I1213 13:47:19.336109  743793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem
	I1213 13:47:19.336133  743793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-390571/.minikube/key.pem (1679 bytes)
	I1213 13:47:19.336181  743793 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem org=jenkins.newest-cni-362964 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-362964]
	I1213 13:47:19.394450  743793 provision.go:177] copyRemoteCerts
	I1213 13:47:19.394502  743793 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 13:47:19.394534  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:19.411750  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:19.507173  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 13:47:19.523766  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 13:47:19.539629  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 13:47:19.555589  743793 provision.go:87] duration metric: took 238.436679ms to configureAuth
	I1213 13:47:19.555611  743793 ubuntu.go:206] setting minikube options for container-runtime
	I1213 13:47:19.555811  743793 config.go:182] Loaded profile config "newest-cni-362964": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:47:19.555940  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:19.574297  743793 main.go:143] libmachine: Using SSH client type: native
	I1213 13:47:19.574507  743793 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33520 <nil> <nil>}
	I1213 13:47:19.574528  743793 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 13:47:19.858331  743793 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 13:47:19.858358  743793 machine.go:97] duration metric: took 3.999058826s to provisionDockerMachine
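	Provisioning ends by dropping CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and bouncing CRI-O so the service CIDR is treated as an insecure registry. A sketch of that step plus a follow-up health check; it assumes the crio unit reads this file as an environment file, which is how the minikube base image appears to wire it (treat that as an assumption, not something shown in this log):
	sudo mkdir -p /etc/sysconfig
	printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
	  | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio
	# Confirm the runtime came back and answers on its CRI socket
	sudo crictl info >/dev/null && echo "crio is answering"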
	I1213 13:47:19.858370  743793 start.go:293] postStartSetup for "newest-cni-362964" (driver="docker")
	I1213 13:47:19.858383  743793 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 13:47:19.858433  743793 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 13:47:19.858474  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:19.876049  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:19.971385  743793 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 13:47:19.974791  743793 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 13:47:19.974823  743793 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 13:47:19.974837  743793 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/addons for local assets ...
	I1213 13:47:19.974893  743793 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-390571/.minikube/files for local assets ...
	I1213 13:47:19.974988  743793 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem -> 3941302.pem in /etc/ssl/certs
	I1213 13:47:19.975100  743793 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 13:47:19.983356  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:47:20.000196  743793 start.go:296] duration metric: took 141.812887ms for postStartSetup
	I1213 13:47:20.000258  743793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:47:20.000314  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:20.017528  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:20.108567  743793 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 13:47:20.112996  743793 fix.go:56] duration metric: took 4.537388049s for fixHost
	I1213 13:47:20.113024  743793 start.go:83] releasing machines lock for "newest-cni-362964", held for 4.537441376s
	I1213 13:47:20.113088  743793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-362964
	I1213 13:47:20.130726  743793 ssh_runner.go:195] Run: cat /version.json
	I1213 13:47:20.130793  743793 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 13:47:20.130802  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:20.130878  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:20.148822  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:20.150135  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:20.295420  743793 ssh_runner.go:195] Run: systemctl --version
	I1213 13:47:20.301859  743793 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 13:47:20.336602  743793 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 13:47:20.341211  743793 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 13:47:20.341287  743793 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 13:47:20.349054  743793 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 13:47:20.349073  743793 start.go:496] detecting cgroup driver to use...
	I1213 13:47:20.349107  743793 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 13:47:20.349163  743793 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 13:47:20.362399  743793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 13:47:20.374254  743793 docker.go:218] disabling cri-docker service (if available) ...
	I1213 13:47:20.374306  743793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 13:47:20.388234  743793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 13:47:20.399676  743793 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 13:47:20.474562  743793 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 13:47:20.554145  743793 docker.go:234] disabling docker service ...
	I1213 13:47:20.554226  743793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 13:47:20.568676  743793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 13:47:20.580110  743793 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 13:47:20.658535  743793 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 13:47:20.737176  743793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 13:47:20.748856  743793 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 13:47:20.762068  743793 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 13:47:20.762129  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.770675  743793 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 13:47:20.770733  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.779036  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.786966  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.795024  743793 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 13:47:20.802620  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.810583  743793 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.818388  743793 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 13:47:20.826379  743793 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 13:47:20.833195  743793 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 13:47:20.839891  743793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:47:20.916966  743793 ssh_runner.go:195] Run: sudo systemctl restart crio
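	Taken together, the sed edits above converge /etc/crio/crio.conf.d/02-crio.conf on the systemd cgroup driver, the pause:3.10.1 image and an unrestricted unprivileged-port sysctl, while /etc/crictl.yaml pins crictl to CRI-O's socket. An illustrative sketch of the end state those commands aim for; the key names are real CRI-O and crictl options, but the actual drop-in shipped in the base image may group or order them differently:
	# crictl: talk to CRI-O's socket by default
	cat <<'EOF' | sudo tee /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock
	EOF
	# CRI-O drop-in: the settings the sed pipeline above converges on
	cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/02-crio.conf
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart crio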
	I1213 13:47:21.051005  743793 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 13:47:21.051061  743793 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 13:47:21.054826  743793 start.go:564] Will wait 60s for crictl version
	I1213 13:47:21.054891  743793 ssh_runner.go:195] Run: which crictl
	I1213 13:47:21.058302  743793 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 13:47:21.081279  743793 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 13:47:21.081361  743793 ssh_runner.go:195] Run: crio --version
	I1213 13:47:21.110234  743793 ssh_runner.go:195] Run: crio --version
	I1213 13:47:21.139180  743793 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 13:47:21.140352  743793 cli_runner.go:164] Run: docker network inspect newest-cni-362964 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 13:47:21.158817  743793 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 13:47:21.162980  743793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:47:21.174525  743793 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 13:47:21.175553  743793 kubeadm.go:884] updating cluster {Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 13:47:21.175710  743793 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 13:47:21.175761  743793 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:47:21.210092  743793 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:47:21.210114  743793 crio.go:433] Images already preloaded, skipping extraction
	I1213 13:47:21.210160  743793 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 13:47:21.234690  743793 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 13:47:21.234711  743793 cache_images.go:86] Images are preloaded, skipping loading
	I1213 13:47:21.234719  743793 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 13:47:21.234845  743793 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-362964 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
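	The kubelet flags above are installed as a systemd drop-in (the 374-byte 10-kubeadm.conf scp'd a few lines below); the empty ExecStart= line clears whatever command the base kubelet.service defines before substituting the versioned binary. A hand-rolled sketch of installing the same directives as a drop-in, paths and flags taken from this log:
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	[Unit]
	Wants=crio.service
	[Service]
	# First ExecStart= resets the ExecStart inherited from kubelet.service
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-362964 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	EOF
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet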
	I1213 13:47:21.234912  743793 ssh_runner.go:195] Run: crio config
	I1213 13:47:21.282460  743793 cni.go:84] Creating CNI manager for ""
	I1213 13:47:21.282487  743793 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 13:47:21.282509  743793 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 13:47:21.282539  743793 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-362964 NodeName:newest-cni-362964 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 13:47:21.282708  743793 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-362964"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
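	The generated kubeadm.yaml is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. On a fresh cluster a file of this shape is what gets handed to kubeadm; this particular run instead detects existing configuration and goes through the cluster-restart path. Purely for orientation, a hedged sketch of the fresh-init usage (standard kubeadm flags, not taken from this log):
	# Bootstrap a new control plane from a config file of this shape
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=NumCPU,Mem   # preflight relaxations are an example, not from this log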
	
	I1213 13:47:21.282807  743793 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 13:47:21.290750  743793 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 13:47:21.290846  743793 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 13:47:21.298168  743793 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 13:47:21.310228  743793 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 13:47:21.322581  743793 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1213 13:47:21.334293  743793 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 13:47:21.337735  743793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 13:47:21.347200  743793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:47:21.425075  743793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:47:21.445740  743793 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964 for IP: 192.168.76.2
	I1213 13:47:21.445766  743793 certs.go:195] generating shared ca certs ...
	I1213 13:47:21.445805  743793 certs.go:227] acquiring lock for ca certs: {Name:mkb6963f3134ffd486c672ddb3a967e56122d5d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:47:21.445974  743793 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key
	I1213 13:47:21.446031  743793 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key
	I1213 13:47:21.446043  743793 certs.go:257] generating profile certs ...
	I1213 13:47:21.446154  743793 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/client.key
	I1213 13:47:21.446224  743793 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key.a735fadb
	I1213 13:47:21.446272  743793 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key
	I1213 13:47:21.446406  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem (1338 bytes)
	W1213 13:47:21.446452  743793 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130_empty.pem, impossibly tiny 0 bytes
	I1213 13:47:21.446466  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 13:47:21.446502  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/ca.pem (1078 bytes)
	I1213 13:47:21.446547  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/cert.pem (1123 bytes)
	I1213 13:47:21.446593  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/certs/key.pem (1679 bytes)
	I1213 13:47:21.446654  743793 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem (1708 bytes)
	I1213 13:47:21.447541  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 13:47:21.465298  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 13:47:21.483440  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 13:47:21.502103  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 13:47:21.522629  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 13:47:21.542814  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 13:47:21.559135  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 13:47:21.575971  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/newest-cni-362964/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 13:47:21.591916  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/certs/394130.pem --> /usr/share/ca-certificates/394130.pem (1338 bytes)
	I1213 13:47:21.608394  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/ssl/certs/3941302.pem --> /usr/share/ca-certificates/3941302.pem (1708 bytes)
	I1213 13:47:21.624618  743793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 13:47:21.642224  743793 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 13:47:21.654261  743793 ssh_runner.go:195] Run: openssl version
	I1213 13:47:21.660050  743793 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/394130.pem
	I1213 13:47:21.668116  743793 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/394130.pem /etc/ssl/certs/394130.pem
	I1213 13:47:21.675369  743793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/394130.pem
	I1213 13:47:21.679216  743793 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 13:13 /usr/share/ca-certificates/394130.pem
	I1213 13:47:21.679263  743793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/394130.pem
	I1213 13:47:21.712864  743793 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 13:47:21.720130  743793 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3941302.pem
	I1213 13:47:21.727050  743793 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3941302.pem /etc/ssl/certs/3941302.pem
	I1213 13:47:21.733917  743793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3941302.pem
	I1213 13:47:21.737465  743793 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 13:13 /usr/share/ca-certificates/3941302.pem
	I1213 13:47:21.737512  743793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3941302.pem
	I1213 13:47:21.771088  743793 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 13:47:21.778112  743793 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:47:21.784916  743793 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 13:47:21.791680  743793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:47:21.794961  743793 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:05 /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:47:21.795003  743793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 13:47:21.829333  743793 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
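	Each CA that minikube installs ends up trusted through OpenSSL's hashed-symlink convention: the PEM lands in /usr/share/ca-certificates and a <subject-hash>.0 symlink in /etc/ssl/certs points back at it (b5213941.0 above is exactly the subject hash of minikubeCA.pem). The same dance for an arbitrary certificate, with a hypothetical file name:
	CERT=/usr/share/ca-certificates/my-ca.pem
	# The subject-name hash is what OpenSSL uses to look certs up in a CApath directory
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
	# Verify the link resolves and the store can build a chain to it
	openssl verify -CApath /etc/ssl/certs "$CERT"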
	I1213 13:47:21.836601  743793 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 13:47:21.840092  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 13:47:21.873865  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 13:47:21.907577  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 13:47:21.942677  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 13:47:21.990730  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 13:47:22.038527  743793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
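	The six openssl -checkend 86400 calls are presumably the "will any control-plane cert expire within the next 24 hours" gate run before reusing the existing certificates. A minimal sketch of the same check with a readable result, using one of the paths from the log as an example:
	CRT=/var/lib/minikube/certs/apiserver-kubelet-client.crt
	# -checkend N exits non-zero if the cert expires within N seconds
	if openssl x509 -noout -in "$CRT" -checkend 86400; then
	  echo "ok: valid for at least another 24h"
	else
	  echo "renew: expires within 24h (or already expired)"
	fi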
	I1213 13:47:22.089275  743793 kubeadm.go:401] StartCluster: {Name:newest-cni-362964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-362964 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:47:22.089396  743793 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 13:47:22.089456  743793 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 13:47:22.124825  743793 cri.go:89] found id: "5da4caf87e21e2c24c49feccc728cddb718f645b6ea4db87e0bf78cf3c81e434"
	I1213 13:47:22.124858  743793 cri.go:89] found id: "bf5900d175c7d2696a8f8d812ce80bb83b78d5e729b180a06ffe24fd4380248b"
	I1213 13:47:22.124862  743793 cri.go:89] found id: "110b53112a1a28576070f2a2242056e28359eefcce484ce9f1badc19b9aa9fe0"
	I1213 13:47:22.124866  743793 cri.go:89] found id: "467a2e1a14516b194138faf28743f2e31cc6c2c67e3a2b45354fa6c0ff15d609"
	I1213 13:47:22.124869  743793 cri.go:89] found id: ""
	I1213 13:47:22.124908  743793 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 13:47:22.137341  743793 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:47:22Z" level=error msg="open /run/runc: no such file or directory"
	I1213 13:47:22.137416  743793 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 13:47:22.145362  743793 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 13:47:22.145377  743793 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 13:47:22.145421  743793 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 13:47:22.152664  743793 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 13:47:22.153211  743793 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-362964" does not appear in /home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:47:22.153502  743793 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-390571/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-362964" cluster setting kubeconfig missing "newest-cni-362964" context setting]
	I1213 13:47:22.154092  743793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/kubeconfig: {Name:mke96882ff9199e558f67b9408c8f04265bde7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:47:22.155563  743793 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 13:47:22.163308  743793 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1213 13:47:22.163339  743793 kubeadm.go:602] duration metric: took 17.955654ms to restartPrimaryControlPlane
	I1213 13:47:22.163350  743793 kubeadm.go:403] duration metric: took 74.090212ms to StartCluster
	I1213 13:47:22.163370  743793 settings.go:142] acquiring lock: {Name:mkb44193ba58b09d8615650747eaad19c43e1a80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:47:22.163433  743793 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:47:22.164305  743793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-390571/kubeconfig: {Name:mke96882ff9199e558f67b9408c8f04265bde7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 13:47:22.164552  743793 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 13:47:22.164629  743793 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 13:47:22.164752  743793 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-362964"
	I1213 13:47:22.164768  743793 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-362964"
	W1213 13:47:22.164802  743793 addons.go:248] addon storage-provisioner should already be in state true
	I1213 13:47:22.164794  743793 addons.go:70] Setting dashboard=true in profile "newest-cni-362964"
	I1213 13:47:22.164827  743793 config.go:182] Loaded profile config "newest-cni-362964": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:47:22.164851  743793 host.go:66] Checking if "newest-cni-362964" exists ...
	I1213 13:47:22.164849  743793 addons.go:70] Setting default-storageclass=true in profile "newest-cni-362964"
	I1213 13:47:22.164885  743793 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-362964"
	I1213 13:47:22.164831  743793 addons.go:239] Setting addon dashboard=true in "newest-cni-362964"
	W1213 13:47:22.164998  743793 addons.go:248] addon dashboard should already be in state true
	I1213 13:47:22.165031  743793 host.go:66] Checking if "newest-cni-362964" exists ...
	I1213 13:47:22.165174  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:22.165349  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:22.165507  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:22.166701  743793 out.go:179] * Verifying Kubernetes components...
	I1213 13:47:22.167981  743793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 13:47:22.192293  743793 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 13:47:22.192607  743793 addons.go:239] Setting addon default-storageclass=true in "newest-cni-362964"
	W1213 13:47:22.192634  743793 addons.go:248] addon default-storageclass should already be in state true
	I1213 13:47:22.192668  743793 host.go:66] Checking if "newest-cni-362964" exists ...
	I1213 13:47:22.193205  743793 cli_runner.go:164] Run: docker container inspect newest-cni-362964 --format={{.State.Status}}
	I1213 13:47:22.193489  743793 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 13:47:22.194545  743793 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 13:47:22.194630  743793 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:47:22.194650  743793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 13:47:22.194717  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:22.197270  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 13:47:22.197289  743793 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 13:47:22.197336  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:22.223581  743793 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 13:47:22.223609  743793 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 13:47:22.223725  743793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-362964
	I1213 13:47:22.235014  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:22.236249  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:22.249859  743793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/newest-cni-362964/id_rsa Username:docker}
	I1213 13:47:22.322026  743793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 13:47:22.335417  743793 api_server.go:52] waiting for apiserver process to appear ...
	I1213 13:47:22.335494  743793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:47:22.347746  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 13:47:22.347860  743793 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 13:47:22.348903  743793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 13:47:22.348995  743793 api_server.go:72] duration metric: took 184.405092ms to wait for apiserver process to appear ...
	I1213 13:47:22.349020  743793 api_server.go:88] waiting for apiserver healthz status ...
	I1213 13:47:22.349038  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:22.357705  743793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 13:47:22.364407  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 13:47:22.364428  743793 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 13:47:22.378144  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 13:47:22.378163  743793 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 13:47:22.392593  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 13:47:22.392619  743793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 13:47:22.406406  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 13:47:22.406430  743793 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 13:47:22.420459  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 13:47:22.420499  743793 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 13:47:22.432910  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 13:47:22.432934  743793 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 13:47:22.444815  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 13:47:22.444841  743793 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 13:47:22.458380  743793 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 13:47:22.458404  743793 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 13:47:22.471198  743793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 13:47:23.763047  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 13:47:23.763087  743793 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 13:47:23.763104  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:23.768125  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 13:47:23.768149  743793 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 13:47:23.849412  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:23.855339  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:47:23.855368  743793 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:47:24.312519  743793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.963582954s)
	I1213 13:47:24.312612  743793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.954874796s)
	I1213 13:47:24.312784  743793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.841538299s)
	I1213 13:47:24.314353  743793 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-362964 addons enable metrics-server
	
	I1213 13:47:24.323239  743793 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1213 13:47:24.324346  743793 addons.go:530] duration metric: took 2.159730307s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1213 13:47:24.349242  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:24.353865  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:47:24.353887  743793 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:47:24.849405  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:24.854959  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 13:47:24.854986  743793 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 13:47:25.349483  743793 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 13:47:25.353727  743793 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1213 13:47:25.354695  743793 api_server.go:141] control plane version: v1.35.0-beta.0
	I1213 13:47:25.354720  743793 api_server.go:131] duration metric: took 3.00569336s to wait for apiserver health ...
	I1213 13:47:25.354729  743793 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 13:47:25.358250  743793 system_pods.go:59] 8 kube-system pods found
	I1213 13:47:25.358279  743793 system_pods.go:61] "coredns-7d764666f9-rqktl" [7c70d7d0-5139-4893-905c-0e183495035e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 13:47:25.358287  743793 system_pods.go:61] "etcd-newest-cni-362964" [49d03570-d59e-4e95-902f-1994733e6009] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 13:47:25.358295  743793 system_pods.go:61] "kindnet-qk8dn" [0df822e7-da1c-43ee-9a1e-b2131ae84e50] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1213 13:47:25.358303  743793 system_pods.go:61] "kube-apiserver-newest-cni-362964" [31c7799d-0188-4e2f-8d32-eb6e3ffe29ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 13:47:25.358315  743793 system_pods.go:61] "kube-controller-manager-newest-cni-362964" [cee82184-0e71-4dfb-8851-d642f2716578] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 13:47:25.358325  743793 system_pods.go:61] "kube-proxy-97cpx" [c081628a-7cdd-4b8c-9d28-9d95707c6064] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 13:47:25.358338  743793 system_pods.go:61] "kube-scheduler-newest-cni-362964" [d160f41f-e904-4d11-9b2c-157bfcbc668f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 13:47:25.358346  743793 system_pods.go:61] "storage-provisioner" [b6d4689e-b3f1-496d-bfd4-11cb93ea7c15] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 13:47:25.358352  743793 system_pods.go:74] duration metric: took 3.617333ms to wait for pod list to return data ...
	I1213 13:47:25.358359  743793 default_sa.go:34] waiting for default service account to be created ...
	I1213 13:47:25.360577  743793 default_sa.go:45] found service account: "default"
	I1213 13:47:25.360597  743793 default_sa.go:55] duration metric: took 2.231432ms for default service account to be created ...
	I1213 13:47:25.360614  743793 kubeadm.go:587] duration metric: took 3.196023464s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 13:47:25.360633  743793 node_conditions.go:102] verifying NodePressure condition ...
	I1213 13:47:25.362709  743793 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 13:47:25.362732  743793 node_conditions.go:123] node cpu capacity is 8
	I1213 13:47:25.362750  743793 node_conditions.go:105] duration metric: took 2.111782ms to run NodePressure ...
	I1213 13:47:25.362764  743793 start.go:242] waiting for startup goroutines ...
	I1213 13:47:25.362789  743793 start.go:247] waiting for cluster config update ...
	I1213 13:47:25.362806  743793 start.go:256] writing updated cluster config ...
	I1213 13:47:25.363125  743793 ssh_runner.go:195] Run: rm -f paused
	I1213 13:47:25.410985  743793 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1213 13:47:25.412352  743793 out.go:179] * Done! kubectl is now configured to use "newest-cni-362964" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 13 13:46:50 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:46:50.863164282Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 13:46:50 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:46:50.867100804Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 13:46:50 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:46:50.867125176Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 13:47:06 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:06.98024766Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4610e88f-91b8-4dc9-bfd5-d71b1801702c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:47:06 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:06.981346573Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9537215a-a158-4e9a-aa82-87beb80a4c57 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:47:06 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:06.982599554Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8v9dp/dashboard-metrics-scraper" id=43f94380-e3da-429d-b068-75c319a9ee33 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:47:06 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:06.982749307Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:06 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:06.989174462Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:06 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:06.989669059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:07 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:07.020888409Z" level=info msg="Created container 6ded9f03b4daa14802c47d962fca913f09fb2e6a6f9427e0a4c0c99b83f2a573: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8v9dp/dashboard-metrics-scraper" id=43f94380-e3da-429d-b068-75c319a9ee33 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:47:07 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:07.021503833Z" level=info msg="Starting container: 6ded9f03b4daa14802c47d962fca913f09fb2e6a6f9427e0a4c0c99b83f2a573" id=538585fe-32bc-49ec-8b44-764a4153501a name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:47:07 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:07.023727608Z" level=info msg="Started container" PID=1784 containerID=6ded9f03b4daa14802c47d962fca913f09fb2e6a6f9427e0a4c0c99b83f2a573 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8v9dp/dashboard-metrics-scraper id=538585fe-32bc-49ec-8b44-764a4153501a name=/runtime.v1.RuntimeService/StartContainer sandboxID=349937d79c24c3bb7d19489d270c33ef31d471dc751ac83533a226980e443e27
	Dec 13 13:47:07 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:07.11032268Z" level=info msg="Removing container: a1ff92e19c1c102ff902b879ae9b96bd8e1b42102b397dd0d3e000d063ec4870" id=9451087c-3c18-4360-865a-4bcefb1946a2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 13:47:07 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:07.121023525Z" level=info msg="Removed container a1ff92e19c1c102ff902b879ae9b96bd8e1b42102b397dd0d3e000d063ec4870: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8v9dp/dashboard-metrics-scraper" id=9451087c-3c18-4360-865a-4bcefb1946a2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 13:47:11 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:11.121453372Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=412dee1d-b6c8-46cc-8744-ea392cbfd157 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:47:11 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:11.122416899Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=34befef1-20f2-4e57-8344-aa13e0e56471 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 13:47:11 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:11.12353311Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b006d57f-6378-4c61-a9d0-bb80a7e342d4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:47:11 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:11.123663491Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:11 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:11.129079852Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:11 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:11.129264119Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/88b09ad1ac82ed5b9bf69f5ed50d34e0d61addc0711327d360ecfc36d1659494/merged/etc/passwd: no such file or directory"
	Dec 13 13:47:11 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:11.129296629Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/88b09ad1ac82ed5b9bf69f5ed50d34e0d61addc0711327d360ecfc36d1659494/merged/etc/group: no such file or directory"
	Dec 13 13:47:11 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:11.129541779Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 13:47:11 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:11.157466171Z" level=info msg="Created container 0c83493337f4062fa6f81e8d01db7498fef9b6315d8fe35541f7e23b56f0a375: kube-system/storage-provisioner/storage-provisioner" id=b006d57f-6378-4c61-a9d0-bb80a7e342d4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 13:47:11 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:11.15811689Z" level=info msg="Starting container: 0c83493337f4062fa6f81e8d01db7498fef9b6315d8fe35541f7e23b56f0a375" id=f700a082-6b66-4b1b-9ac1-808571e1bf58 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 13:47:11 default-k8s-diff-port-038239 crio[567]: time="2025-12-13T13:47:11.159919834Z" level=info msg="Started container" PID=1798 containerID=0c83493337f4062fa6f81e8d01db7498fef9b6315d8fe35541f7e23b56f0a375 description=kube-system/storage-provisioner/storage-provisioner id=f700a082-6b66-4b1b-9ac1-808571e1bf58 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8d6b1eb91c70a83a8f6203ab9ddae87ab7fe7f0db76615db07127be13c4488a1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	0c83493337f40       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   8d6b1eb91c70a       storage-provisioner                                    kube-system
	6ded9f03b4daa       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   349937d79c24c       dashboard-metrics-scraper-6ffb444bf9-8v9dp             kubernetes-dashboard
	3204f1765ba08       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   cb79ef9143023       kubernetes-dashboard-855c9754f9-zlkps                  kubernetes-dashboard
	b7f6d2e12ce0e       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   50a1820e62e13       busybox                                                default
	8405085ebe705       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   5fbd5ced2aea6       coredns-66bc5c9577-tzzmx                               kube-system
	5795013750c24       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           51 seconds ago      Running             kube-proxy                  0                   eb58de6f92ffc       kube-proxy-lzwfg                                       kube-system
	64ba07c204a50       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   8d6b1eb91c70a       storage-provisioner                                    kube-system
	ece6cabbfd4de       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   0db3f4ba8580b       kindnet-c65rs                                          kube-system
	e666bfd89f30f       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           54 seconds ago      Running             kube-apiserver              0                   9ba8b9c325cd2       kube-apiserver-default-k8s-diff-port-038239            kube-system
	99cbb0e73d219       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           54 seconds ago      Running             kube-scheduler              0                   e27423a7cb39a       kube-scheduler-default-k8s-diff-port-038239            kube-system
	6d6fd6c98d01d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           54 seconds ago      Running             etcd                        0                   d79667ee7685c       etcd-default-k8s-diff-port-038239                      kube-system
	334a9f2c1095a       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           54 seconds ago      Running             kube-controller-manager     0                   4f65e0aceb6fa       kube-controller-manager-default-k8s-diff-port-038239   kube-system
	
	
	==> coredns [8405085ebe705ba7b423d0c2b5d4883fb997fcd33bd7a956ea769773c6341e34] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60624 - 60979 "HINFO IN 7938494452543560986.1464981115743233958. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.487865899s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-038239
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-038239
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=default-k8s-diff-port-038239
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T13_45_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 13:45:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-038239
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 13:47:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 13:47:10 +0000   Sat, 13 Dec 2025 13:45:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 13:47:10 +0000   Sat, 13 Dec 2025 13:45:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 13:47:10 +0000   Sat, 13 Dec 2025 13:45:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 13:47:10 +0000   Sat, 13 Dec 2025 13:45:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-038239
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                411d424d-b720-4c82-b27f-51e7954655e7
	  Boot ID:                    3a031c38-2de5-4abf-9191-ca3cf8c37af1
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-tzzmx                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-default-k8s-diff-port-038239                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-c65rs                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-default-k8s-diff-port-038239             250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-038239    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-lzwfg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-default-k8s-diff-port-038239             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-8v9dp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zlkps                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 102s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeHasSufficientMemory  109s               kubelet          Node default-k8s-diff-port-038239 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s               kubelet          Node default-k8s-diff-port-038239 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s               kubelet          Node default-k8s-diff-port-038239 status is now: NodeHasSufficientPID
	  Normal  Starting                 109s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s               node-controller  Node default-k8s-diff-port-038239 event: Registered Node default-k8s-diff-port-038239 in Controller
	  Normal  NodeReady                92s                kubelet          Node default-k8s-diff-port-038239 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 55s)  kubelet          Node default-k8s-diff-port-038239 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 55s)  kubelet          Node default-k8s-diff-port-038239 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 55s)  kubelet          Node default-k8s-diff-port-038239 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node default-k8s-diff-port-038239 event: Registered Node default-k8s-diff-port-038239 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 d4 5a 35 c7 c3 08 06
	[  +0.021086] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +19.681588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 0c 97 18 9b e3 08 06
	[  +0.000314] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 04 61 d2 c8 ed 08 06
	[Dec13 13:44] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[  +7.252347] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 ce fd 58 59 0f 08 06
	[  +0.000117] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	[  +1.567410] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 59 b8 80 29 4a 08 06
	[  +0.000370] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 3a 18 d2 d9 8b 08 06
	[ +13.814205] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 cb 6b 87 5d af 08 06
	[  +0.000318] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 8e 9c 2f 1d 25 08 06
	[Dec13 13:45] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8e 49 cc d7 b3 9c 08 06
	[  +0.000851] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe cc 55 7b a9 74 08 06
	
	
	==> etcd [6d6fd6c98d01d12e4674d5a2044ea8579a053244365f2f43c908c34dac570480] <==
	{"level":"warn","ts":"2025-12-13T13:46:38.888377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.893659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.902150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.912185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.921296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.931187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.939887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.947739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.955935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.965789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.974892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.988706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:38.998178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:39.010248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T13:46:39.070542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56256","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T13:46:43.311187Z","caller":"traceutil/trace.go:172","msg":"trace[71490223] transaction","detail":"{read_only:false; response_revision:476; number_of_response:1; }","duration":"148.139496ms","start":"2025-12-13T13:46:43.163019Z","end":"2025-12-13T13:46:43.311158Z","steps":["trace[71490223] 'process raft request'  (duration: 145.504639ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:46:43.311339Z","caller":"traceutil/trace.go:172","msg":"trace[1273882751] transaction","detail":"{read_only:false; response_revision:477; number_of_response:1; }","duration":"148.227558ms","start":"2025-12-13T13:46:43.163090Z","end":"2025-12-13T13:46:43.311318Z","steps":["trace[1273882751] 'process raft request'  (duration: 148.010391ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:46:43.604576Z","caller":"traceutil/trace.go:172","msg":"trace[1210521705] transaction","detail":"{read_only:false; response_revision:482; number_of_response:1; }","duration":"141.354245ms","start":"2025-12-13T13:46:43.463200Z","end":"2025-12-13T13:46:43.604554Z","steps":["trace[1210521705] 'process raft request'  (duration: 137.365038ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:46:43.730152Z","caller":"traceutil/trace.go:172","msg":"trace[1699595448] linearizableReadLoop","detail":"{readStateIndex:513; appliedIndex:513; }","duration":"118.933405ms","start":"2025-12-13T13:46:43.611192Z","end":"2025-12-13T13:46:43.730125Z","steps":["trace[1699595448] 'read index received'  (duration: 118.924588ms)","trace[1699595448] 'applied index is now lower than readState.Index'  (duration: 7.71µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T13:46:43.759024Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"147.805775ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking\" limit:1 ","response":"range_response_count:1 size:370"}
	{"level":"info","ts":"2025-12-13T13:46:43.759119Z","caller":"traceutil/trace.go:172","msg":"trace[1970472961] range","detail":"{range_begin:/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking; range_end:; response_count:1; response_revision:482; }","duration":"147.922959ms","start":"2025-12-13T13:46:43.611181Z","end":"2025-12-13T13:46:43.759104Z","steps":["trace[1970472961] 'agreement among raft nodes before linearized reading'  (duration: 119.041307ms)","trace[1970472961] 'range keys from in-memory index tree'  (duration: 28.664266ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T13:46:43.759106Z","caller":"traceutil/trace.go:172","msg":"trace[387163409] transaction","detail":"{read_only:false; response_revision:483; number_of_response:1; }","duration":"149.556472ms","start":"2025-12-13T13:46:43.609519Z","end":"2025-12-13T13:46:43.759076Z","steps":["trace[387163409] 'process raft request'  (duration: 120.698651ms)","trace[387163409] 'compare'  (duration: 28.725338ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T13:46:43.778897Z","caller":"traceutil/trace.go:172","msg":"trace[1783498315] transaction","detail":"{read_only:false; response_revision:484; number_of_response:1; }","duration":"168.523071ms","start":"2025-12-13T13:46:43.610357Z","end":"2025-12-13T13:46:43.778880Z","steps":["trace[1783498315] 'process raft request'  (duration: 168.305286ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:46:43.778916Z","caller":"traceutil/trace.go:172","msg":"trace[52909227] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"168.397125ms","start":"2025-12-13T13:46:43.610496Z","end":"2025-12-13T13:46:43.778893Z","steps":["trace[52909227] 'process raft request'  (duration: 168.261769ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T13:46:43.778901Z","caller":"traceutil/trace.go:172","msg":"trace[1476993380] transaction","detail":"{read_only:false; response_revision:486; number_of_response:1; }","duration":"166.535644ms","start":"2025-12-13T13:46:43.612346Z","end":"2025-12-13T13:46:43.778882Z","steps":["trace[1476993380] 'process raft request'  (duration: 166.482687ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:47:31 up  2:29,  0 user,  load average: 4.14, 4.11, 2.73
	Linux default-k8s-diff-port-038239 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ece6cabbfd4de4c0bf871f5e57ce5c4769621be9c162b8537aebfca43ac97e90] <==
	I1213 13:46:40.590597       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 13:46:40.590982       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1213 13:46:40.591241       1 main.go:148] setting mtu 1500 for CNI 
	I1213 13:46:40.591266       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 13:46:40.591279       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T13:46:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 13:46:40.807925       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 13:46:40.889848       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 13:46:40.890985       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 13:46:40.891141       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 13:46:41.291067       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 13:46:41.291093       1 metrics.go:72] Registering metrics
	I1213 13:46:41.291144       1 controller.go:711] "Syncing nftables rules"
	I1213 13:46:50.807737       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 13:46:50.807823       1 main.go:301] handling current node
	I1213 13:47:00.808428       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 13:47:00.808477       1 main.go:301] handling current node
	I1213 13:47:10.808641       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 13:47:10.808675       1 main.go:301] handling current node
	I1213 13:47:20.811883       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 13:47:20.811913       1 main.go:301] handling current node
	I1213 13:47:30.816870       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 13:47:30.816902       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e666bfd89f30f85cd8c1e8c64c04b77df4cb27f6c7df7838bdfaf6bf54d5ab00] <==
	I1213 13:46:39.682271       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1213 13:46:39.682299       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 13:46:39.683770       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 13:46:39.683801       1 aggregator.go:171] initial CRD sync complete...
	I1213 13:46:39.683810       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 13:46:39.683824       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 13:46:39.683830       1 cache.go:39] Caches are synced for autoregister controller
	I1213 13:46:39.682286       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1213 13:46:39.688395       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 13:46:39.691833       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 13:46:39.693912       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 13:46:39.693973       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 13:46:39.717604       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 13:46:40.004174       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 13:46:40.005335       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 13:46:40.046321       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 13:46:40.071680       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 13:46:40.080126       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 13:46:40.122296       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.236.104"}
	I1213 13:46:40.132695       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.88.246"}
	I1213 13:46:40.575666       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 13:46:43.162468       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:46:43.162517       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 13:46:43.362181       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 13:46:43.462699       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [334a9f2c1095a76f324f17afb6dae5685e1e8043861620865467bb49011fd8ea] <==
	I1213 13:46:42.884565       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 13:46:42.885830       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1213 13:46:42.888064       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 13:46:42.890305       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 13:46:42.908161       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 13:46:42.909413       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 13:46:42.909434       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 13:46:42.909472       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 13:46:42.909804       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1213 13:46:42.911054       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1213 13:46:42.912250       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 13:46:42.914392       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 13:46:42.917670       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1213 13:46:42.917819       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1213 13:46:42.917859       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1213 13:46:42.917869       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1213 13:46:42.917875       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1213 13:46:42.918004       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 13:46:42.922940       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 13:46:42.924718       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1213 13:46:43.069456       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1213 13:46:43.108240       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 13:46:43.108258       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 13:46:43.108263       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 13:46:43.169735       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [5795013750c24c2283403e4012508ffdb318fa62ff8d0f01376a3b277bfc99f8] <==
	I1213 13:46:40.377002       1 server_linux.go:53] "Using iptables proxy"
	I1213 13:46:40.479637       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 13:46:40.580029       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 13:46:40.580079       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1213 13:46:40.580185       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 13:46:40.605924       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 13:46:40.605996       1 server_linux.go:132] "Using iptables Proxier"
	I1213 13:46:40.612404       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 13:46:40.613079       1 server.go:527] "Version info" version="v1.34.2"
	I1213 13:46:40.613129       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:46:40.615001       1 config.go:200] "Starting service config controller"
	I1213 13:46:40.615019       1 config.go:106] "Starting endpoint slice config controller"
	I1213 13:46:40.615031       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 13:46:40.615031       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 13:46:40.615114       1 config.go:309] "Starting node config controller"
	I1213 13:46:40.615122       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 13:46:40.615131       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 13:46:40.615005       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 13:46:40.615750       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 13:46:40.715253       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 13:46:40.715265       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 13:46:40.715984       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [99cbb0e73d2197ad662dba2a00e0ec2f3ce53cd9276e552c1ca3a62cac601105] <==
	I1213 13:46:39.032275       1 serving.go:386] Generated self-signed cert in-memory
	I1213 13:46:39.661885       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 13:46:39.662011       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 13:46:39.668575       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1213 13:46:39.668616       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 13:46:39.668650       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 13:46:39.668625       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 13:46:39.668719       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 13:46:39.669044       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 13:46:39.668648       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1213 13:46:39.669306       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 13:46:39.769085       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 13:46:39.769159       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1213 13:46:39.769086       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Dec 13 13:46:42 default-k8s-diff-port-038239 kubelet[734]: I1213 13:46:42.817020     734 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 13 13:46:43 default-k8s-diff-port-038239 kubelet[734]: I1213 13:46:43.928974     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1875c354-6bf3-4786-b35a-cac99170722a-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-zlkps\" (UID: \"1875c354-6bf3-4786-b35a-cac99170722a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zlkps"
	Dec 13 13:46:43 default-k8s-diff-port-038239 kubelet[734]: I1213 13:46:43.929054     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqsmp\" (UniqueName: \"kubernetes.io/projected/988b8622-06ce-4f75-97da-b7867be34de6-kube-api-access-xqsmp\") pod \"dashboard-metrics-scraper-6ffb444bf9-8v9dp\" (UID: \"988b8622-06ce-4f75-97da-b7867be34de6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8v9dp"
	Dec 13 13:46:43 default-k8s-diff-port-038239 kubelet[734]: I1213 13:46:43.929189     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8tvs\" (UniqueName: \"kubernetes.io/projected/1875c354-6bf3-4786-b35a-cac99170722a-kube-api-access-x8tvs\") pod \"kubernetes-dashboard-855c9754f9-zlkps\" (UID: \"1875c354-6bf3-4786-b35a-cac99170722a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zlkps"
	Dec 13 13:46:43 default-k8s-diff-port-038239 kubelet[734]: I1213 13:46:43.929258     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/988b8622-06ce-4f75-97da-b7867be34de6-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-8v9dp\" (UID: \"988b8622-06ce-4f75-97da-b7867be34de6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8v9dp"
	Dec 13 13:46:47 default-k8s-diff-port-038239 kubelet[734]: I1213 13:46:47.048251     734 scope.go:117] "RemoveContainer" containerID="6995a475a06e84b21528339a5da0d0945da4371c01d87060f6ae9d2a160e16ce"
	Dec 13 13:46:48 default-k8s-diff-port-038239 kubelet[734]: I1213 13:46:48.053638     734 scope.go:117] "RemoveContainer" containerID="6995a475a06e84b21528339a5da0d0945da4371c01d87060f6ae9d2a160e16ce"
	Dec 13 13:46:48 default-k8s-diff-port-038239 kubelet[734]: I1213 13:46:48.054022     734 scope.go:117] "RemoveContainer" containerID="a1ff92e19c1c102ff902b879ae9b96bd8e1b42102b397dd0d3e000d063ec4870"
	Dec 13 13:46:48 default-k8s-diff-port-038239 kubelet[734]: E1213 13:46:48.054210     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8v9dp_kubernetes-dashboard(988b8622-06ce-4f75-97da-b7867be34de6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8v9dp" podUID="988b8622-06ce-4f75-97da-b7867be34de6"
	Dec 13 13:46:49 default-k8s-diff-port-038239 kubelet[734]: I1213 13:46:49.058689     734 scope.go:117] "RemoveContainer" containerID="a1ff92e19c1c102ff902b879ae9b96bd8e1b42102b397dd0d3e000d063ec4870"
	Dec 13 13:46:49 default-k8s-diff-port-038239 kubelet[734]: E1213 13:46:49.058903     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8v9dp_kubernetes-dashboard(988b8622-06ce-4f75-97da-b7867be34de6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8v9dp" podUID="988b8622-06ce-4f75-97da-b7867be34de6"
	Dec 13 13:46:50 default-k8s-diff-port-038239 kubelet[734]: I1213 13:46:50.073237     734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zlkps" podStartSLOduration=1.531378751 podStartE2EDuration="7.073211577s" podCreationTimestamp="2025-12-13 13:46:43 +0000 UTC" firstStartedPulling="2025-12-13 13:46:44.149770519 +0000 UTC m=+7.265661114" lastFinishedPulling="2025-12-13 13:46:49.691603333 +0000 UTC m=+12.807493940" observedRunningTime="2025-12-13 13:46:50.072878747 +0000 UTC m=+13.188769363" watchObservedRunningTime="2025-12-13 13:46:50.073211577 +0000 UTC m=+13.189102194"
	Dec 13 13:46:55 default-k8s-diff-port-038239 kubelet[734]: I1213 13:46:55.413466     734 scope.go:117] "RemoveContainer" containerID="a1ff92e19c1c102ff902b879ae9b96bd8e1b42102b397dd0d3e000d063ec4870"
	Dec 13 13:46:55 default-k8s-diff-port-038239 kubelet[734]: E1213 13:46:55.413723     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8v9dp_kubernetes-dashboard(988b8622-06ce-4f75-97da-b7867be34de6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8v9dp" podUID="988b8622-06ce-4f75-97da-b7867be34de6"
	Dec 13 13:47:06 default-k8s-diff-port-038239 kubelet[734]: I1213 13:47:06.979761     734 scope.go:117] "RemoveContainer" containerID="a1ff92e19c1c102ff902b879ae9b96bd8e1b42102b397dd0d3e000d063ec4870"
	Dec 13 13:47:07 default-k8s-diff-port-038239 kubelet[734]: I1213 13:47:07.108542     734 scope.go:117] "RemoveContainer" containerID="a1ff92e19c1c102ff902b879ae9b96bd8e1b42102b397dd0d3e000d063ec4870"
	Dec 13 13:47:07 default-k8s-diff-port-038239 kubelet[734]: I1213 13:47:07.108891     734 scope.go:117] "RemoveContainer" containerID="6ded9f03b4daa14802c47d962fca913f09fb2e6a6f9427e0a4c0c99b83f2a573"
	Dec 13 13:47:07 default-k8s-diff-port-038239 kubelet[734]: E1213 13:47:07.109139     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8v9dp_kubernetes-dashboard(988b8622-06ce-4f75-97da-b7867be34de6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8v9dp" podUID="988b8622-06ce-4f75-97da-b7867be34de6"
	Dec 13 13:47:11 default-k8s-diff-port-038239 kubelet[734]: I1213 13:47:11.121055     734 scope.go:117] "RemoveContainer" containerID="64ba07c204a50056a0dfebc7954692a6ef002bb0ddac55dae71b35ceda35cfd1"
	Dec 13 13:47:15 default-k8s-diff-port-038239 kubelet[734]: I1213 13:47:15.413418     734 scope.go:117] "RemoveContainer" containerID="6ded9f03b4daa14802c47d962fca913f09fb2e6a6f9427e0a4c0c99b83f2a573"
	Dec 13 13:47:15 default-k8s-diff-port-038239 kubelet[734]: E1213 13:47:15.413598     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8v9dp_kubernetes-dashboard(988b8622-06ce-4f75-97da-b7867be34de6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8v9dp" podUID="988b8622-06ce-4f75-97da-b7867be34de6"
	Dec 13 13:47:26 default-k8s-diff-port-038239 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 13:47:26 default-k8s-diff-port-038239 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 13:47:26 default-k8s-diff-port-038239 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 13:47:26 default-k8s-diff-port-038239 systemd[1]: kubelet.service: Consumed 1.643s CPU time.
	
	
	==> kubernetes-dashboard [3204f1765ba08dd53b15129320ee6b079bc92ca458fd51509e14ccc8640a8ccc] <==
	2025/12/13 13:46:49 Starting overwatch
	2025/12/13 13:46:49 Using namespace: kubernetes-dashboard
	2025/12/13 13:46:49 Using in-cluster config to connect to apiserver
	2025/12/13 13:46:49 Using secret token for csrf signing
	2025/12/13 13:46:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 13:46:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 13:46:49 Successful initial request to the apiserver, version: v1.34.2
	2025/12/13 13:46:49 Generating JWE encryption key
	2025/12/13 13:46:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 13:46:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 13:46:49 Initializing JWE encryption key from synchronized object
	2025/12/13 13:46:49 Creating in-cluster Sidecar client
	2025/12/13 13:46:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 13:46:49 Serving insecurely on HTTP port: 9090
	2025/12/13 13:47:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [0c83493337f4062fa6f81e8d01db7498fef9b6315d8fe35541f7e23b56f0a375] <==
	I1213 13:47:11.171813       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 13:47:11.179223       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 13:47:11.179267       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 13:47:11.181264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:47:14.636845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:47:18.897077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:47:22.495877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:47:25.550178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:47:28.573344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:47:28.578385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 13:47:28.578629       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 13:47:28.578818       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"164d55c5-fd3d-4e0d-b772-31680a1bef78", APIVersion:"v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-038239_c7db4044-176f-475f-89f1-6cbf9a73e0e5 became leader
	I1213 13:47:28.578869       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-038239_c7db4044-176f-475f-89f1-6cbf9a73e0e5!
	W1213 13:47:28.581080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:47:28.584973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 13:47:28.679105       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-038239_c7db4044-176f-475f-89f1-6cbf9a73e0e5!
	W1213 13:47:30.588380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 13:47:30.594673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [64ba07c204a50056a0dfebc7954692a6ef002bb0ddac55dae71b35ceda35cfd1] <==
	I1213 13:46:40.342758       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 13:47:10.347408       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-038239 -n default-k8s-diff-port-038239
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-038239 -n default-k8s-diff-port-038239: exit status 2 (343.444432ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-038239 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.25s)

                                                
                                    

Test pass (352/415)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.4
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 2.91
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.07
18 TestDownloadOnly/v1.34.2/DeleteAll 0.22
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.2
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.22
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
29 TestDownloadOnlyKic 0.4
30 TestBinaryMirror 0.82
31 TestOffline 83.26
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 123.88
40 TestAddons/serial/GCPAuth/Namespaces 2.26
41 TestAddons/serial/GCPAuth/FakeCredentials 7.42
57 TestAddons/StoppedEnableDisable 18.58
58 TestCertOptions 25.37
59 TestCertExpiration 220.16
61 TestForceSystemdFlag 24.91
62 TestForceSystemdEnv 32.23
67 TestErrorSpam/setup 18.7
68 TestErrorSpam/start 0.66
69 TestErrorSpam/status 0.92
70 TestErrorSpam/pause 6.95
71 TestErrorSpam/unpause 5.49
72 TestErrorSpam/stop 18.1
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 68.6
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 6.35
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.06
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.24
84 TestFunctional/serial/CacheCmd/cache/add_local 1.24
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.86
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 47.3
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.19
95 TestFunctional/serial/LogsFileCmd 1.21
96 TestFunctional/serial/InvalidService 3.96
98 TestFunctional/parallel/ConfigCmd 0.47
99 TestFunctional/parallel/DashboardCmd 7.29
100 TestFunctional/parallel/DryRun 0.38
101 TestFunctional/parallel/InternationalLanguage 0.17
102 TestFunctional/parallel/StatusCmd 0.97
106 TestFunctional/parallel/ServiceCmdConnect 7.55
107 TestFunctional/parallel/AddonsCmd 0.17
108 TestFunctional/parallel/PersistentVolumeClaim 19.13
110 TestFunctional/parallel/SSHCmd 0.57
111 TestFunctional/parallel/CpCmd 1.86
112 TestFunctional/parallel/MySQL 24.6
113 TestFunctional/parallel/FileSync 0.33
114 TestFunctional/parallel/CertSync 1.93
118 TestFunctional/parallel/NodeLabels 0.07
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.63
122 TestFunctional/parallel/License 0.46
123 TestFunctional/parallel/Version/short 0.09
124 TestFunctional/parallel/Version/components 0.66
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.37
128 TestFunctional/parallel/ImageCommands/ImageListYaml 2
129 TestFunctional/parallel/ImageCommands/ImageBuild 5.26
130 TestFunctional/parallel/ImageCommands/Setup 0.99
131 TestFunctional/parallel/ServiceCmd/DeployApp 9.16
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.14
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.23
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.48
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.19
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.33
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.57
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
144 TestFunctional/parallel/ServiceCmd/List 0.49
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
146 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
151 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
153 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
154 TestFunctional/parallel/ServiceCmd/Format 0.38
155 TestFunctional/parallel/ServiceCmd/URL 0.36
156 TestFunctional/parallel/ProfileCmd/profile_list 0.42
157 TestFunctional/parallel/MountCmd/any-port 10.2
158 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
159 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
160 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
161 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
162 TestFunctional/parallel/MountCmd/specific-port 2.21
163 TestFunctional/parallel/MountCmd/VerifyCleanup 1.75
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.01
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 35.24
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 6.02
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.54
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.21
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.07
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.29
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.52
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.11
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 47.14
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.18
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.19
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 3.84
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.44
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 6.1
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.38
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.17
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 1.02
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 8.67
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.15
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 20.21
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.69
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.7
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 44.73
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.32
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.67
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.08
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.64
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.44
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.09
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.6
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.37
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.24
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.26
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.28
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 2.27
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.39
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.19
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.92
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 17.26
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 1.34
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.5
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.6
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.37
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 8.13
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.89
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.9
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.54
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.42
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.55
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.4
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.42
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.57
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 5.78
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.16
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.17
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.18
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 2.23
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.94
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.03
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 140.03
266 TestMultiControlPlane/serial/DeployApp 4
267 TestMultiControlPlane/serial/PingHostFromPods 1.03
268 TestMultiControlPlane/serial/AddWorkerNode 24.02
269 TestMultiControlPlane/serial/NodeLabels 0.06
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
271 TestMultiControlPlane/serial/CopyFile 16.96
272 TestMultiControlPlane/serial/StopSecondaryNode 14.16
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
274 TestMultiControlPlane/serial/RestartSecondaryNode 14.53
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 114.98
277 TestMultiControlPlane/serial/DeleteSecondaryNode 10.49
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
279 TestMultiControlPlane/serial/StopCluster 41.4
280 TestMultiControlPlane/serial/RestartCluster 53.25
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
282 TestMultiControlPlane/serial/AddSecondaryNode 37.83
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.89
288 TestJSONOutput/start/Command 40.49
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 7.99
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.22
313 TestKicCustomNetwork/create_custom_network 25.43
314 TestKicCustomNetwork/use_default_bridge_network 22.36
315 TestKicExistingNetwork 25.98
316 TestKicCustomSubnet 26.22
317 TestKicStaticIP 23.49
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 44.5
322 TestMountStart/serial/StartWithMountFirst 7.65
323 TestMountStart/serial/VerifyMountFirst 0.27
324 TestMountStart/serial/StartWithMountSecond 7.81
325 TestMountStart/serial/VerifyMountSecond 0.27
326 TestMountStart/serial/DeleteFirst 1.65
327 TestMountStart/serial/VerifyMountPostDelete 0.27
328 TestMountStart/serial/Stop 1.25
329 TestMountStart/serial/RestartStopped 7.29
330 TestMountStart/serial/VerifyMountPostStop 0.27
333 TestMultiNode/serial/FreshStart2Nodes 58.59
334 TestMultiNode/serial/DeployApp2Nodes 3.25
335 TestMultiNode/serial/PingHostFrom2Pods 0.72
336 TestMultiNode/serial/AddNode 26.01
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.65
339 TestMultiNode/serial/CopyFile 9.66
340 TestMultiNode/serial/StopNode 2.23
341 TestMultiNode/serial/StartAfterStop 7.13
342 TestMultiNode/serial/RestartKeepsNodes 81.13
343 TestMultiNode/serial/DeleteNode 5.21
344 TestMultiNode/serial/StopMultiNode 28.52
345 TestMultiNode/serial/RestartMultiNode 43.88
346 TestMultiNode/serial/ValidateNameConflict 25.3
351 TestPreload 82.27
353 TestScheduledStopUnix 97.38
356 TestInsufficientStorage 8.67
357 TestRunningBinaryUpgrade 44.03
359 TestKubernetesUpgrade 298.64
360 TestMissingContainerUpgrade 63.05
362 TestPause/serial/Start 81.24
363 TestStoppedBinaryUpgrade/Setup 0.58
364 TestStoppedBinaryUpgrade/Upgrade 312.1
365 TestPause/serial/SecondStartNoReconfiguration 7.32
368 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
369 TestNoKubernetes/serial/StartWithK8s 19.66
370 TestNoKubernetes/serial/StartWithStopK8s 15.9
371 TestNoKubernetes/serial/Start 6.75
372 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
373 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
374 TestNoKubernetes/serial/ProfileList 36.32
375 TestNoKubernetes/serial/Stop 1.5
376 TestNoKubernetes/serial/StartNoArgs 6.56
377 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
385 TestNetworkPlugins/group/false 3.85
396 TestNetworkPlugins/group/auto/Start 43.35
397 TestNetworkPlugins/group/kindnet/Start 43.79
398 TestNetworkPlugins/group/auto/KubeletFlags 0.29
399 TestNetworkPlugins/group/auto/NetCatPod 7.18
400 TestNetworkPlugins/group/auto/DNS 0.11
401 TestNetworkPlugins/group/auto/Localhost 0.09
402 TestNetworkPlugins/group/auto/HairPin 0.09
403 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
404 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
405 TestNetworkPlugins/group/kindnet/NetCatPod 8.18
406 TestNetworkPlugins/group/kindnet/DNS 0.14
407 TestNetworkPlugins/group/kindnet/Localhost 0.1
408 TestNetworkPlugins/group/kindnet/HairPin 0.1
409 TestNetworkPlugins/group/calico/Start 45.26
410 TestNetworkPlugins/group/custom-flannel/Start 63.03
411 TestStoppedBinaryUpgrade/MinikubeLogs 1.01
412 TestNetworkPlugins/group/enable-default-cni/Start 64.26
413 TestNetworkPlugins/group/calico/ControllerPod 6.01
414 TestNetworkPlugins/group/calico/KubeletFlags 0.36
415 TestNetworkPlugins/group/calico/NetCatPod 8.2
416 TestNetworkPlugins/group/calico/DNS 0.12
417 TestNetworkPlugins/group/calico/Localhost 0.1
418 TestNetworkPlugins/group/calico/HairPin 0.1
419 TestNetworkPlugins/group/flannel/Start 45.08
420 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
421 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.21
422 TestNetworkPlugins/group/bridge/Start 69.53
423 TestNetworkPlugins/group/custom-flannel/DNS 0.13
424 TestNetworkPlugins/group/custom-flannel/Localhost 0.09
425 TestNetworkPlugins/group/custom-flannel/HairPin 0.08
426 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
427 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.23
428 TestNetworkPlugins/group/flannel/ControllerPod 6.01
430 TestStartStop/group/old-k8s-version/serial/FirstStart 48.8
431 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
432 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
433 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
434 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
435 TestNetworkPlugins/group/flannel/NetCatPod 12.19
436 TestNetworkPlugins/group/flannel/DNS 0.14
437 TestNetworkPlugins/group/flannel/Localhost 0.12
438 TestNetworkPlugins/group/flannel/HairPin 0.12
440 TestStartStop/group/no-preload/serial/FirstStart 45.81
442 TestStartStop/group/embed-certs/serial/FirstStart 40.97
443 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
444 TestNetworkPlugins/group/bridge/NetCatPod 10.22
445 TestStartStop/group/old-k8s-version/serial/DeployApp 8.27
446 TestNetworkPlugins/group/bridge/DNS 0.13
447 TestNetworkPlugins/group/bridge/Localhost 0.11
448 TestNetworkPlugins/group/bridge/HairPin 0.09
450 TestStartStop/group/old-k8s-version/serial/Stop 16.26
451 TestStartStop/group/no-preload/serial/DeployApp 8.58
453 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 39.46
455 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
456 TestStartStop/group/old-k8s-version/serial/SecondStart 50.15
457 TestStartStop/group/no-preload/serial/Stop 16.33
458 TestStartStop/group/embed-certs/serial/DeployApp 7.26
460 TestStartStop/group/embed-certs/serial/Stop 18.64
461 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
462 TestStartStop/group/no-preload/serial/SecondStart 52.61
463 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
464 TestStartStop/group/embed-certs/serial/SecondStart 45.44
465 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.28
467 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.76
468 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
469 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
470 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
472 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
473 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 44.74
475 TestStartStop/group/newest-cni/serial/FirstStart 26.31
476 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
477 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
478 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
479 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
481 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
482 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
484 TestStartStop/group/newest-cni/serial/DeployApp 0
486 TestStartStop/group/newest-cni/serial/Stop 7.97
487 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
488 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
489 TestStartStop/group/newest-cni/serial/SecondStart 10.41
490 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
491 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
492 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
493 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
495 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
TestDownloadOnly/v1.28.0/json-events (4.4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-292122 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-292122 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.398395168s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.40s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1213 13:04:35.529278  394130 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1213 13:04:35.529393  394130 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-292122
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-292122: exit status 85 (72.918872ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-292122 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-292122 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:04:31
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:04:31.189313  394142 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:04:31.189553  394142 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:04:31.189561  394142 out.go:374] Setting ErrFile to fd 2...
	I1213 13:04:31.189565  394142 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:04:31.189805  394142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	W1213 13:04:31.189943  394142 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22122-390571/.minikube/config/config.json: open /home/jenkins/minikube-integration/22122-390571/.minikube/config/config.json: no such file or directory
	I1213 13:04:31.190434  394142 out.go:368] Setting JSON to true
	I1213 13:04:31.191349  394142 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6419,"bootTime":1765624652,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:04:31.191410  394142 start.go:143] virtualization: kvm guest
	I1213 13:04:31.194205  394142 out.go:99] [download-only-292122] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1213 13:04:31.194361  394142 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball: no such file or directory
	I1213 13:04:31.194424  394142 notify.go:221] Checking for updates...
	I1213 13:04:31.195569  394142 out.go:171] MINIKUBE_LOCATION=22122
	I1213 13:04:31.196692  394142 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:04:31.197835  394142 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:04:31.199182  394142 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:04:31.200321  394142 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1213 13:04:31.202310  394142 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 13:04:31.202636  394142 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:04:31.229075  394142 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:04:31.229217  394142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:04:31.285114  394142 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-13 13:04:31.275845107 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:04:31.285253  394142 docker.go:319] overlay module found
	I1213 13:04:31.286719  394142 out.go:99] Using the docker driver based on user configuration
	I1213 13:04:31.286755  394142 start.go:309] selected driver: docker
	I1213 13:04:31.286761  394142 start.go:927] validating driver "docker" against <nil>
	I1213 13:04:31.286898  394142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:04:31.341863  394142 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-13 13:04:31.332371099 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:04:31.342023  394142 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 13:04:31.342540  394142 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1213 13:04:31.342700  394142 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 13:04:31.344304  394142 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-292122 host does not exist
	  To start a cluster, run: "minikube start -p download-only-292122"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
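Note: the exit status 85 above is expected. A --download-only start only populates the local cache; no control-plane host is ever created, so `minikube logs` has nothing to collect. A minimal way to reproduce this outside the test harness (assuming MINIKUBE_HOME is exported to the same .minikube directory shown in the environment dump above):

    out/minikube-linux-amd64 start --download-only -p download-only-292122 --force --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/"        # the v1.28.0 cri-o preload tarball should appear here
    out/minikube-linux-amd64 logs -p download-only-292122   # exits 85: the host was never created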

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-292122
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/json-events (2.91s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-703172 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-703172 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (2.913533084s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (2.91s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1213 13:04:38.874683  394130 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1213 13:04:38.874717  394130 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-703172
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-703172: exit status 85 (72.586082ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-292122 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-292122 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ delete  │ -p download-only-292122                                                                                                                                                   │ download-only-292122 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ start   │ -o=json --download-only -p download-only-703172 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-703172 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:04:36
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:04:36.013362  394500 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:04:36.013652  394500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:04:36.013664  394500 out.go:374] Setting ErrFile to fd 2...
	I1213 13:04:36.013668  394500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:04:36.013926  394500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:04:36.014456  394500 out.go:368] Setting JSON to true
	I1213 13:04:36.015361  394500 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6424,"bootTime":1765624652,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:04:36.015415  394500 start.go:143] virtualization: kvm guest
	I1213 13:04:36.017379  394500 out.go:99] [download-only-703172] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:04:36.017572  394500 notify.go:221] Checking for updates...
	I1213 13:04:36.018630  394500 out.go:171] MINIKUBE_LOCATION=22122
	I1213 13:04:36.019739  394500 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:04:36.020830  394500 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:04:36.021903  394500 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:04:36.022835  394500 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1213 13:04:36.024595  394500 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 13:04:36.024865  394500 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:04:36.047086  394500 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:04:36.047185  394500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:04:36.100433  394500 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-13 13:04:36.091350033 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:04:36.100540  394500 docker.go:319] overlay module found
	I1213 13:04:36.102097  394500 out.go:99] Using the docker driver based on user configuration
	I1213 13:04:36.102136  394500 start.go:309] selected driver: docker
	I1213 13:04:36.102149  394500 start.go:927] validating driver "docker" against <nil>
	I1213 13:04:36.102230  394500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:04:36.154528  394500 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-13 13:04:36.145628163 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:04:36.154755  394500 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 13:04:36.155361  394500 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1213 13:04:36.155533  394500 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 13:04:36.157089  394500 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-703172 host does not exist
	  To start a cluster, run: "minikube start -p download-only-703172"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-703172
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/json-events (3.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-519964 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-519964 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.203017288s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1213 13:04:42.514035  394130 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1213 13:04:42.514077  394130 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-390571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-519964
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-519964: exit status 85 (71.505669ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-292122 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-292122 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ delete  │ -p download-only-292122                                                                                                                                                          │ download-only-292122 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ start   │ -o=json --download-only -p download-only-703172 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-703172 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ delete  │ -p download-only-703172                                                                                                                                                          │ download-only-703172 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │ 13 Dec 25 13:04 UTC │
	│ start   │ -o=json --download-only -p download-only-519964 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-519964 │ jenkins │ v1.37.0 │ 13 Dec 25 13:04 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 13:04:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 13:04:39.363888  394862 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:04:39.364183  394862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:04:39.364192  394862 out.go:374] Setting ErrFile to fd 2...
	I1213 13:04:39.364197  394862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:04:39.364376  394862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:04:39.364846  394862 out.go:368] Setting JSON to true
	I1213 13:04:39.365827  394862 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6427,"bootTime":1765624652,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:04:39.365877  394862 start.go:143] virtualization: kvm guest
	I1213 13:04:39.367491  394862 out.go:99] [download-only-519964] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:04:39.367661  394862 notify.go:221] Checking for updates...
	I1213 13:04:39.368659  394862 out.go:171] MINIKUBE_LOCATION=22122
	I1213 13:04:39.369956  394862 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:04:39.371217  394862 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:04:39.372153  394862 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:04:39.373118  394862 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1213 13:04:39.375000  394862 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 13:04:39.375245  394862 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:04:39.397531  394862 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:04:39.397664  394862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:04:39.450924  394862 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-13 13:04:39.441666233 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:04:39.451026  394862 docker.go:319] overlay module found
	I1213 13:04:39.452464  394862 out.go:99] Using the docker driver based on user configuration
	I1213 13:04:39.452496  394862 start.go:309] selected driver: docker
	I1213 13:04:39.452505  394862 start.go:927] validating driver "docker" against <nil>
	I1213 13:04:39.452578  394862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:04:39.507106  394862 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-13 13:04:39.498307706 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:04:39.507263  394862 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 13:04:39.507744  394862 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1213 13:04:39.507929  394862 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 13:04:39.509403  394862 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-519964 host does not exist
	  To start a cluster, run: "minikube start -p download-only-519964"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-519964
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.4s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-752677 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "download-docker-752677" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-752677
--- PASS: TestDownloadOnlyKic (0.40s)

                                                
                                    
x
+
TestBinaryMirror (0.82s)

                                                
                                                
=== RUN   TestBinaryMirror
I1213 13:04:43.764147  394130 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-589486 --alsologtostderr --binary-mirror http://127.0.0.1:44049 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-589486" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-589486
--- PASS: TestBinaryMirror (0.82s)
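Note: --binary-mirror points Kubernetes binary downloads (kubectl, kubelet, kubeadm) at an alternate base URL in place of dl.k8s.io; here the mirror at http://127.0.0.1:44049 is a throwaway local server started by the test. A rough sketch of the same setup, using python3 http.server as a stand-in mirror (an assumption, not what the test itself runs):

    # Serve a directory laid out like dl.k8s.io, e.g. ./release/v1.34.2/bin/linux/amd64/kubectl (plus kubectl.sha256)
    python3 -m http.server 44049 &
    out/minikube-linux-amd64 start --download-only -p binary-mirror-589486 --binary-mirror http://127.0.0.1:44049 --driver=docker --container-runtime=crio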

                                                
                                    
x
+
TestOffline (83.26s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-444562 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-444562 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m20.80385242s)
helpers_test.go:176: Cleaning up "offline-crio-444562" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-444562
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-444562: (2.457706262s)
--- PASS: TestOffline (83.26s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-802674
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-802674: exit status 85 (65.845903ms)

                                                
                                                
-- stdout --
	* Profile "addons-802674" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-802674"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-802674
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-802674: exit status 85 (69.417994ms)

                                                
                                                
-- stdout --
	* Profile "addons-802674" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-802674"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
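Note: both PreSetup checks rely on the same exit status 85 seen in the LogsDuration runs, returned here because the addons-802674 profile does not exist yet. A hedged guard for scripts that only want to toggle addons on an existing profile might look like:

    if out/minikube-linux-amd64 profile list 2>/dev/null | grep -q addons-802674; then
      out/minikube-linux-amd64 addons enable dashboard -p addons-802674
    else
      echo "profile addons-802674 not found; run: minikube start -p addons-802674"
    fi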

                                                
                                    
x
+
TestAddons/Setup (123.88s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-802674 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-802674 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m3.883476239s)
--- PASS: TestAddons/Setup (123.88s)
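Note: the setup enables every addon under test in a single start by repeating --addons. For a quicker local reproduction it is usually enough to pass a subset of the same flags, for example:

    out/minikube-linux-amd64 start -p addons-802674 --wait=true --memory=4096 --driver=docker --container-runtime=crio \
      --addons=registry --addons=metrics-server --addons=ingress --addons=gcp-auth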

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (2.26s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-802674 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-802674 get secret gcp-auth -n new-namespace
addons_test.go:646: (dbg) Non-zero exit: kubectl --context addons-802674 get secret gcp-auth -n new-namespace: exit status 1 (137.321329ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:638: (dbg) Run:  kubectl --context addons-802674 logs -l app=gcp-auth -n gcp-auth
I1213 13:06:49.584541  394130 retry.go:31] will retry after 1.756520631s: %!w(<nil>): gcp-auth container logs: 
-- stdout --
	2025/12/13 13:06:44 GCP Auth Webhook started!
	2025/12/13 13:06:49 Ready to marshal response ...
	2025/12/13 13:06:49 Ready to write response ...

                                                
                                                
-- /stdout --
addons_test.go:646: (dbg) Run:  kubectl --context addons-802674 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (2.26s)
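Note: the NotFound on the first lookup is a race with the gcp-auth webhook, which is expected to copy the gcp-auth secret into namespaces as they are created; the retry ~1.8s later finds it. The same check can be run by hand:

    kubectl --context addons-802674 create ns new-namespace
    kubectl --context addons-802674 logs -l app=gcp-auth -n gcp-auth      # webhook should log "Ready to write response ..."
    kubectl --context addons-802674 get secret gcp-auth -n new-namespace  # may need a short retry right after namespace creation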

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (7.42s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-802674 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-802674 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [29062754-a680-492d-be0c-824bf09da2ed] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [29062754-a680-492d-be0c-824bf09da2ed] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.004166249s
addons_test.go:696: (dbg) Run:  kubectl --context addons-802674 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-802674 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-802674 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.42s)
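Note: even with fake credentials the webhook injects the credentials path and project environment into the pod, which is what the two printenv calls assert. A condensed manual check against the same busybox pod:

    kubectl --context addons-802674 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT"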

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (18.58s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-802674
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-802674: (18.291483133s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-802674
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-802674
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-802674
--- PASS: TestAddons/StoppedEnableDisable (18.58s)

                                                
                                    
x
+
TestCertOptions (25.37s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-586947 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-586947 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (22.060710505s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-586947 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-586947 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-586947 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-586947" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-586947
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-586947: (2.505189146s)
--- PASS: TestCertOptions (25.37s)
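Note: the openssl call is what actually asserts that the extra --apiserver-ips/--apiserver-names ended up in the apiserver certificate SANs, and the kubectl config view checks the non-default --apiserver-port 8555. Roughly the same verification by hand (grep targets are assumptions about the output, not test assertions):

    out/minikube-linux-amd64 -p cert-options-586947 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    kubectl --context cert-options-586947 config view --minify | grep 8555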

                                                
                                    
x
+
TestCertExpiration (220.16s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-541985 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-541985 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (32.854522724s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-541985 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-541985 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (4.934350226s)
helpers_test.go:176: Cleaning up "cert-expiration-541985" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-541985
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-541985: (2.371094355s)
--- PASS: TestCertExpiration (220.16s)
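Note: the ~220s total versus ~40s of commands indicates the test waits out the 3m certificate lifetime before the second start, which reuses the existing cert-expiration-541985 profile with --cert-expiration=8760h and therefore completes in a few seconds. One way to confirm the refreshed expiry (cert path taken from the cert-options test above):

    out/minikube-linux-amd64 -p cert-expiration-541985 ssh "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"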

                                                
                                    
x
+
TestForceSystemdFlag (24.91s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-212830 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-212830 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.206054689s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-212830 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-212830" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-212830
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-212830: (2.428933669s)
--- PASS: TestForceSystemdFlag (24.91s)
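Note: --force-systemd switches the node's runtime to the systemd cgroup manager, and the test asserts this by reading cri-o's drop-in config. A direct check (the expected setting in that file is typically cgroup_manager = "systemd"):

    out/minikube-linux-amd64 -p force-systemd-flag-212830 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager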

                                                
                                    
x
+
TestForceSystemdEnv (32.23s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-488734 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-488734 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.499940972s)
helpers_test.go:176: Cleaning up "force-systemd-env-488734" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-488734
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-488734: (2.724260425s)
--- PASS: TestForceSystemdEnv (32.23s)

                                                
                                    
x
+
TestErrorSpam/setup (18.7s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-177765 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-177765 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-177765 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-177765 --driver=docker  --container-runtime=crio: (18.70090801s)
--- PASS: TestErrorSpam/setup (18.70s)

                                                
                                    
x
+
TestErrorSpam/start (0.66s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 start --dry-run
--- PASS: TestErrorSpam/start (0.66s)

                                                
                                    
x
+
TestErrorSpam/status (0.92s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 status
--- PASS: TestErrorSpam/status (0.92s)

                                                
                                    
x
+
TestErrorSpam/pause (6.95s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 pause: exit status 80 (2.087136886s)

                                                
                                                
-- stdout --
	* Pausing node nospam-177765 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:10:21Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 pause: exit status 80 (2.418265946s)

                                                
                                                
-- stdout --
	* Pausing node nospam-177765 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:10:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 pause: exit status 80 (2.439009443s)

                                                
                                                
-- stdout --
	* Pausing node nospam-177765 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:10:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.95s)
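Note on the failure mode above: each pause attempt exits 80 because `sudo runc list -f json` inside the node cannot find its state directory (/run/runc), so minikube cannot enumerate the containers it is supposed to pause. A quick way to confirm this from the host, reusing the ssh form the harness uses elsewhere in this report (diagnostic sketch, not part of the test), would be:

	# check whether runc's default state directory exists inside the node
	out/minikube-linux-amd64 -p nospam-177765 ssh sudo ls -ld /run/runc
	# re-run the exact listing call the pause path uses
	out/minikube-linux-amd64 -p nospam-177765 ssh sudo runc list -f json

While the directory is absent, both commands should fail with the same "no such file or directory" message quoted in the stderr blocks above.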

                                                
                                    
TestErrorSpam/unpause (5.49s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 unpause: exit status 80 (1.419010207s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-177765 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:10:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 unpause: exit status 80 (2.079099231s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-177765 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:10:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 unpause: exit status 80 (1.987689528s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-177765 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T13:10:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.49s)

                                                
                                    
TestErrorSpam/stop (18.1s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 stop: (17.895944667s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-177765 --log_dir /tmp/nospam-177765 stop
--- PASS: TestErrorSpam/stop (18.10s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/test/nested/copy/394130/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (68.6s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-018090 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1213 13:11:51.579866  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:11:51.586428  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:11:51.597805  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:11:51.619149  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:11:51.660490  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:11:51.741871  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:11:51.903380  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:11:52.225075  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:11:52.867075  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:11:54.148433  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:11:56.709798  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:12:01.831771  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-018090 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m8.600710627s)
--- PASS: TestFunctional/serial/StartWithProxy (68.60s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.35s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1213 13:12:02.809796  394130 config.go:182] Loaded profile config "functional-018090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-018090 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-018090 --alsologtostderr -v=8: (6.347185881s)
functional_test.go:678: soft start took 6.347979699s for "functional-018090" cluster.
I1213 13:12:09.157362  394130 config.go:182] Loaded profile config "functional-018090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (6.35s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-018090 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 cache add registry.k8s.io/pause:latest
E1213 13:12:12.073912  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-018090 cache add registry.k8s.io/pause:latest: (1.533895542s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-018090 /tmp/TestFunctionalserialCacheCmdcacheadd_local4148166920/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 cache add minikube-local-cache-test:functional-018090
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 cache delete minikube-local-cache-test:functional-018090
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-018090
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-018090 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (279.390695ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 kubectl -- --context functional-018090 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-018090 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (47.3s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-018090 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 13:12:32.555991  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-018090 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.295797458s)
functional_test.go:776: restart took 47.295926183s for "functional-018090" cluster.
I1213 13:13:03.678829  394130 config.go:182] Loaded profile config "functional-018090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (47.30s)
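The --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision flag above is handed through to the kube-apiserver static pod when the cluster restarts. One way to check that the flag actually landed (sketch, not part of the test; the component=kube-apiserver label is the usual kubeadm label and is assumed here) would be:

	kubectl --context functional-018090 -n kube-system get pods -l component=kube-apiserver -o yaml | grep enable-admission-plugins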

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-018090 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.19s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-018090 logs: (1.19207245s)
--- PASS: TestFunctional/serial/LogsCmd (1.19s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.21s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 logs --file /tmp/TestFunctionalserialLogsFileCmd1874637670/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-018090 logs --file /tmp/TestFunctionalserialLogsFileCmd1874637670/001/logs.txt: (1.208306341s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.21s)

                                                
                                    
TestFunctional/serial/InvalidService (3.96s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-018090 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-018090
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-018090: exit status 115 (352.06969ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31964 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-018090 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.96s)
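The exit status 115 above is the expected outcome: invalid-svc is exposed on NodePort 31964 but no pod ever backs it, so the service URL check fails with SVC_UNREACHABLE. The same condition can be seen directly by listing the service's endpoints, which should be empty while the manifest is applied (sketch, not part of the test):

	kubectl --context functional-018090 get endpoints invalid-svc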

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-018090 config get cpus: exit status 14 (85.15315ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-018090 config get cpus: exit status 14 (88.885406ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (7.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-018090 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-018090 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 430818: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.29s)

                                                
                                    
TestFunctional/parallel/DryRun (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-018090 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-018090 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (169.628444ms)

                                                
                                                
-- stdout --
	* [functional-018090] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:13:24.969632  431293 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:13:24.969916  431293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:13:24.969926  431293 out.go:374] Setting ErrFile to fd 2...
	I1213 13:13:24.969930  431293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:13:24.970099  431293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:13:24.970513  431293 out.go:368] Setting JSON to false
	I1213 13:13:24.971473  431293 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6953,"bootTime":1765624652,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:13:24.971532  431293 start.go:143] virtualization: kvm guest
	I1213 13:13:24.973468  431293 out.go:179] * [functional-018090] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:13:24.974714  431293 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:13:24.974763  431293 notify.go:221] Checking for updates...
	I1213 13:13:24.977042  431293 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:13:24.978256  431293 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:13:24.979846  431293 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:13:24.980999  431293 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:13:24.982236  431293 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:13:24.983711  431293 config.go:182] Loaded profile config "functional-018090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:13:24.984326  431293 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:13:25.009170  431293 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:13:25.009258  431293 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:13:25.063131  431293 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-13 13:13:25.052717337 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:13:25.063245  431293 docker.go:319] overlay module found
	I1213 13:13:25.064869  431293 out.go:179] * Using the docker driver based on existing profile
	I1213 13:13:25.065972  431293 start.go:309] selected driver: docker
	I1213 13:13:25.065985  431293 start.go:927] validating driver "docker" against &{Name:functional-018090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-018090 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:13:25.066060  431293 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:13:25.067713  431293 out.go:203] 
	W1213 13:13:25.068750  431293 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 13:13:25.069732  431293 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-018090 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.38s)
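The first dry run above deliberately requests 250MB so that validation fails with RSRC_INSUFFICIENT_REQ_MEMORY, while the second invocation (functional_test.go:1006) omits --memory and succeeds. For comparison, a dry run that clears the 1800MB floor quoted in the error would look like the following (the 2048mb value is only an illustrative figure at or above that minimum):

	out/minikube-linux-amd64 start -p functional-018090 --dry-run --memory 2048mb --alsologtostderr --driver=docker --container-runtime=crio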

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-018090 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-018090 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (164.912332ms)

                                                
                                                
-- stdout --
	* [functional-018090] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:13:25.345987  431527 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:13:25.346069  431527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:13:25.346077  431527 out.go:374] Setting ErrFile to fd 2...
	I1213 13:13:25.346081  431527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:13:25.346364  431527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:13:25.346750  431527 out.go:368] Setting JSON to false
	I1213 13:13:25.347707  431527 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6953,"bootTime":1765624652,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:13:25.347761  431527 start.go:143] virtualization: kvm guest
	I1213 13:13:25.349602  431527 out.go:179] * [functional-018090] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1213 13:13:25.350952  431527 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:13:25.350958  431527 notify.go:221] Checking for updates...
	I1213 13:13:25.353397  431527 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:13:25.354650  431527 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:13:25.355746  431527 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:13:25.360348  431527 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:13:25.361543  431527 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:13:25.363226  431527 config.go:182] Loaded profile config "functional-018090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:13:25.363790  431527 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:13:25.386185  431527 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:13:25.386280  431527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:13:25.441611  431527 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-13 13:13:25.431784157 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:13:25.441718  431527 docker.go:319] overlay module found
	I1213 13:13:25.443377  431527 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1213 13:13:25.444649  431527 start.go:309] selected driver: docker
	I1213 13:13:25.444664  431527 start.go:927] validating driver "docker" against &{Name:functional-018090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-018090 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:13:25.444761  431527 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:13:25.446435  431527 out.go:203] 
	W1213 13:13:25.447577  431527 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 13:13:25.448716  431527 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
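The French messages here come from the process locale rather than from a minikube flag: the test repeats the same under-sized --memory dry run, presumably with a French locale set in its environment. Outside the harness the behaviour should be reproducible along these lines (sketch; which locale variable the binary consults, LC_ALL or LANG, is an assumption):

	LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-018090 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio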

                                                
                                    
TestFunctional/parallel/StatusCmd (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.97s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-018090 create deployment hello-node-connect --image kicbase/echo-server
I1213 13:13:17.364085  394130 detect.go:223] nested VM detected
functional_test.go:1640: (dbg) Run:  kubectl --context functional-018090 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-xk776" [f91ea26d-d13f-42f9-be42-8c9029e55932] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-xk776" [f91ea26d-d13f-42f9-be42-8c9029e55932] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004050275s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31004
functional_test.go:1680: http://192.168.49.2:31004: success! body:
Request served by hello-node-connect-7d85dfc575-xk776

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31004
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.55s)
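
The success check above (fetch the NodePort URL, confirm the echo-server names the serving pod in the body) can be reproduced with a short HTTP client. A minimal sketch, assuming the endpoint printed by `minikube service ... --url` in the log (http://192.168.49.2:31004) is still reachable:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func main() {
	// Endpoint taken from the service URL output above.
	resp, err := http.Get("http://192.168.49.2:31004")
	if err != nil {
		fmt.Fprintln(os.Stderr, "request failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// echo-server reflects the request; the first line names the serving pod.
	if !strings.Contains(string(body), "hello-node-connect") {
		fmt.Fprintf(os.Stderr, "unexpected body:\n%s", body)
		os.Exit(1)
	}
	fmt.Printf("success! body:\n%s", body)
}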

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (19.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [b85f9709-ba52-4788-8b0d-5270d6651526] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003454043s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-018090 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-018090 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-018090 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-018090 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [787e17be-abcc-4cc3-8daa-cd9e6f67fb54] Pending
helpers_test.go:353: "sp-pod" [787e17be-abcc-4cc3-8daa-cd9e6f67fb54] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003550666s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-018090 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-018090 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-018090 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [53b4f011-c807-4a4f-b772-b65e678ff7aa] Pending
helpers_test.go:353: "sp-pod" [53b4f011-c807-4a4f-b772-b65e678ff7aa] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004712323s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-018090 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (19.13s)
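
The sequence above is the standard persistence check for a PVC: write a file into the mounted claim, delete and recreate the pod, and confirm the file is still visible from the replacement pod. A minimal sketch of that round trip driven through kubectl; the pod name, mount path, and YAML path are the ones visible in the log, and waiting for the recreated pod is left as a comment.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command, echoes its output, and aborts on failure.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintf(os.Stderr, "%s %v failed: %v\n", name, args, err)
		os.Exit(1)
	}
}

func main() {
	ctx := "--context=functional-018090"
	// Write a marker file into the volume backed by the claim.
	run("kubectl", ctx, "exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	// Recreate the pod; the PVC (and its data) must survive.
	run("kubectl", ctx, "delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("kubectl", ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (In practice, wait here until the new sp-pod reports Running before exec'ing.)
	run("kubectl", ctx, "exec", "sp-pod", "--", "ls", "/tmp/mount")
}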

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh -n functional-018090 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 cp functional-018090:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd466588327/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh -n functional-018090 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh -n functional-018090 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.86s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (24.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-018090 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-z8bkg" [b1b94bcf-7dbf-4480-a727-83b1a456ae44] Pending
helpers_test.go:353: "mysql-6bcdcbc558-z8bkg" [b1b94bcf-7dbf-4480-a727-83b1a456ae44] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
2025/12/13 13:13:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:353: "mysql-6bcdcbc558-z8bkg" [b1b94bcf-7dbf-4480-a727-83b1a456ae44] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.003463951s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-018090 exec mysql-6bcdcbc558-z8bkg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-018090 exec mysql-6bcdcbc558-z8bkg -- mysql -ppassword -e "show databases;": exit status 1 (138.144448ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 13:13:43.059971  394130 retry.go:31] will retry after 1.158673154s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-018090 exec mysql-6bcdcbc558-z8bkg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-018090 exec mysql-6bcdcbc558-z8bkg -- mysql -ppassword -e "show databases;": exit status 1 (136.168511ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 13:13:44.355596  394130 retry.go:31] will retry after 1.553427489s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-018090 exec mysql-6bcdcbc558-z8bkg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-018090 exec mysql-6bcdcbc558-z8bkg -- mysql -ppassword -e "show databases;": exit status 1 (91.746936ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 13:13:46.001799  394130 retry.go:31] will retry after 1.313360648s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-018090 exec mysql-6bcdcbc558-z8bkg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-018090 exec mysql-6bcdcbc558-z8bkg -- mysql -ppassword -e "show databases;": exit status 1 (92.083563ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 13:13:47.408539  394130 retry.go:31] will retry after 4.817051185s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-018090 exec mysql-6bcdcbc558-z8bkg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.60s)
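
The retries above are expected: the pod reports Running before mysqld finishes initializing, so the first few exec attempts fail with "Access denied" or socket errors until the server is actually up. A minimal sketch of the same retry-with-backoff pattern; targeting deploy/mysql instead of a specific pod name and the eight-attempt cap are choices made for the sketch, while the password and query come from the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-018090", "exec", "deploy/mysql", "--",
		"mysql", "-ppassword", "-e", "show databases;"}
	backoff := time.Second
	// mysqld needs a little while after the pod is Running; retry with growing backoff.
	for attempt := 1; attempt <= 8; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		fmt.Fprintf(os.Stderr, "attempt %d failed: %v; retrying in %v\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	fmt.Fprintln(os.Stderr, "mysql never became ready")
	os.Exit(1)
}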

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/394130/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh "sudo cat /etc/test/nested/copy/394130/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/394130.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh "sudo cat /etc/ssl/certs/394130.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/394130.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh "sudo cat /usr/share/ca-certificates/394130.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3941302.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh "sudo cat /etc/ssl/certs/3941302.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3941302.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh "sudo cat /usr/share/ca-certificates/3941302.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.93s)
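
Each synced certificate is expected both under its original name and under its hashed name (for example 394130.pem alongside 51391683.0). A minimal sketch that re-checks the same paths over `minikube ssh`, with the file names copied from the log above:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Paths taken verbatim from the CertSync log.
	paths := []string{
		"/etc/ssl/certs/394130.pem",
		"/usr/share/ca-certificates/394130.pem",
		"/etc/ssl/certs/51391683.0",
	}
	for _, p := range paths {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-018090",
			"ssh", "sudo cat "+p)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "missing %s: %v\n%s", p, err, out)
			os.Exit(1)
		}
		fmt.Println("found", p)
	}
}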

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-018090 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-018090 ssh "sudo systemctl is-active docker": exit status 1 (315.150434ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-018090 ssh "sudo systemctl is-active containerd": exit status 1 (315.0638ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)
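
Both non-zero exits above are the expected outcome: with cri-o as the active runtime, `systemctl is-active` prints "inactive" for docker and containerd and exits with status 3, which the ssh wrapper surfaces as a failure. A minimal sketch that treats an inactive state as the passing case and only complains if either unit turns out to be active:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	for _, svc := range []string{"docker", "containerd"} {
		// `systemctl is-active` exits non-zero for inactive units, so ignore the
		// error and inspect the printed state instead.
		out, _ := exec.Command("out/minikube-linux-amd64", "-p", "functional-018090",
			"ssh", "sudo systemctl is-active "+svc).CombinedOutput()
		state := strings.TrimSpace(string(out))
		if strings.Contains(state, "active") && !strings.Contains(state, "inactive") {
			fmt.Fprintf(os.Stderr, "%s unexpectedly active: %q\n", svc, state)
			os.Exit(1)
		}
		fmt.Printf("%s: %s\n", svc, state)
	}
}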

                                                
                                    
x
+
TestFunctional/parallel/License (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.66s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-018090 image ls --format json --alsologtostderr:
[{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"d756edd9cc2ecc423a95b3a44d18c01b440b5a645f820d425e30f72aa700276c","repoDigests":["localhost/minikube-local-cache-test@sha256:38e6f72e4254eeee510a9cc36e5210b9883299b131aa31e37088ad1488ed38ec"],"repoTags":["localhost/minikube-local-cache-test:functional-018090"],"size":"3330"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"6e38f40d628
db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a
4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7
f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-018090"],"size":"4945146"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],
"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"si
ze":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry
.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-018090 image ls --format json --alsologtostderr:
I1213 13:13:32.807148  433455 out.go:360] Setting OutFile to fd 1 ...
I1213 13:13:32.807282  433455 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:13:32.807295  433455 out.go:374] Setting ErrFile to fd 2...
I1213 13:13:32.807302  433455 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:13:32.807586  433455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
I1213 13:13:32.808478  433455 config.go:182] Loaded profile config "functional-018090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 13:13:32.808624  433455 config.go:182] Loaded profile config "functional-018090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 13:13:32.809378  433455 cli_runner.go:164] Run: docker container inspect functional-018090 --format={{.State.Status}}
I1213 13:13:32.834141  433455 ssh_runner.go:195] Run: systemctl --version
I1213 13:13:32.834210  433455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-018090
I1213 13:13:32.860673  433455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/functional-018090/id_rsa Username:docker}
I1213 13:13:32.967569  433455 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.37s)
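
The JSON emitted by `image ls --format json` is an array of objects with id, repoDigests, repoTags, and size fields, with size encoded as a string, as seen above. A minimal sketch that parses that output and prints tag/size pairs; the struct fields are inferred from the output shown here rather than from any published schema.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// image mirrors the fields visible in the JSON output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	// Output() keeps only stdout, so the --alsologtostderr lines do not pollute the JSON.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-018090",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "image ls failed:", err)
		os.Exit(1)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Fprintln(os.Stderr, "unexpected output:", err)
		os.Exit(1)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%s\t%s bytes\n", tag, img.Size)
		}
	}
}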

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 image ls --format yaml --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-018090 image ls --format yaml --alsologtostderr: (2.002504929s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-018090 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: d756edd9cc2ecc423a95b3a44d18c01b440b5a645f820d425e30f72aa700276c
repoDigests:
- localhost/minikube-local-cache-test@sha256:38e6f72e4254eeee510a9cc36e5210b9883299b131aa31e37088ad1488ed38ec
repoTags:
- localhost/minikube-local-cache-test:functional-018090
size: "3330"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-018090
size: "4945146"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-018090 image ls --format yaml --alsologtostderr:
I1213 13:13:31.181404  432860 out.go:360] Setting OutFile to fd 1 ...
I1213 13:13:31.181607  432860 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:13:31.181618  432860 out.go:374] Setting ErrFile to fd 2...
I1213 13:13:31.181624  432860 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:13:31.181887  432860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
I1213 13:13:31.182565  432860 config.go:182] Loaded profile config "functional-018090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 13:13:31.182717  432860 config.go:182] Loaded profile config "functional-018090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 13:13:31.183446  432860 cli_runner.go:164] Run: docker container inspect functional-018090 --format={{.State.Status}}
I1213 13:13:31.207884  432860 ssh_runner.go:195] Run: systemctl --version
I1213 13:13:31.208233  432860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-018090
I1213 13:13:31.232820  432860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/functional-018090/id_rsa Username:docker}
I1213 13:13:31.341842  432860 ssh_runner.go:195] Run: sudo crictl images --output json
I1213 13:13:33.089783  432860 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.747896755s)
W1213 13:13:33.092981  432860 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 6f75b8d2-7479-408f-ad3a-0c864cdd8f92
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (2.00s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (5.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-018090 ssh pgrep buildkitd: exit status 1 (335.20573ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 image build -t localhost/my-image:functional-018090 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-018090 image build -t localhost/my-image:functional-018090 testdata/build --alsologtostderr: (4.692499041s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-018090 image build -t localhost/my-image:functional-018090 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 77289f5bff1
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-018090
--> e3c7327299f
Successfully tagged localhost/my-image:functional-018090
e3c7327299f1c96544c6cc53bedf6478b0a78d1fb1f348ca958ffe2c7894908b
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-018090 image build -t localhost/my-image:functional-018090 testdata/build --alsologtostderr:
I1213 13:13:33.507301  433809 out.go:360] Setting OutFile to fd 1 ...
I1213 13:13:33.507640  433809 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:13:33.507654  433809 out.go:374] Setting ErrFile to fd 2...
I1213 13:13:33.507661  433809 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:13:33.507966  433809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
I1213 13:13:33.508610  433809 config.go:182] Loaded profile config "functional-018090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 13:13:33.509455  433809 config.go:182] Loaded profile config "functional-018090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 13:13:33.510048  433809 cli_runner.go:164] Run: docker container inspect functional-018090 --format={{.State.Status}}
I1213 13:13:33.532565  433809 ssh_runner.go:195] Run: systemctl --version
I1213 13:13:33.532626  433809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-018090
I1213 13:13:33.553669  433809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/functional-018090/id_rsa Username:docker}
I1213 13:13:33.663942  433809 build_images.go:162] Building image from path: /tmp/build.1323302388.tar
I1213 13:13:33.664104  433809 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 13:13:33.674550  433809 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1323302388.tar
I1213 13:13:33.679637  433809 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1323302388.tar: stat -c "%s %y" /var/lib/minikube/build/build.1323302388.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1323302388.tar': No such file or directory
I1213 13:13:33.679675  433809 ssh_runner.go:362] scp /tmp/build.1323302388.tar --> /var/lib/minikube/build/build.1323302388.tar (3072 bytes)
I1213 13:13:33.703189  433809 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1323302388
I1213 13:13:33.713147  433809 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1323302388 -xf /var/lib/minikube/build/build.1323302388.tar
I1213 13:13:33.723253  433809 crio.go:315] Building image: /var/lib/minikube/build/build.1323302388
I1213 13:13:33.723322  433809 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-018090 /var/lib/minikube/build/build.1323302388 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1213 13:13:38.100489  433809 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-018090 /var/lib/minikube/build/build.1323302388 --cgroup-manager=cgroupfs: (4.377141397s)
I1213 13:13:38.100549  433809 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1323302388
I1213 13:13:38.109700  433809 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1323302388.tar
I1213 13:13:38.117815  433809 build_images.go:218] Built localhost/my-image:functional-018090 from /tmp/build.1323302388.tar
I1213 13:13:38.117856  433809 build_images.go:134] succeeded building to: functional-018090
I1213 13:13:38.117862  433809 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.26s)
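
As the stderr above shows, the build stages the context as a tarball inside the node and runs podman build there; from the user's side it is just `image build` followed by `image ls`. A minimal sketch of that round trip (not the test's code), assuming a testdata/build directory containing a Dockerfile like the three-step one in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	minikube := "out/minikube-linux-amd64"
	tag := "localhost/my-image:functional-018090"
	// Build the image on the node from a local context directory.
	build := exec.Command(minikube, "-p", "functional-018090",
		"image", "build", "-t", tag, "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		fmt.Fprintf(os.Stderr, "build failed: %v\n%s", err, out)
		os.Exit(1)
	}
	// Confirm the tag shows up in the node's image store.
	out, err := exec.Command(minikube, "-p", "functional-018090", "image", "ls").Output()
	if err != nil || !strings.Contains(string(out), tag) {
		fmt.Fprintf(os.Stderr, "built image not listed (err=%v):\n%s", err, out)
		os.Exit(1)
	}
	fmt.Println("image built and listed:", tag)
}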

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-018090
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.99s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (9.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-018090 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-018090 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-bvc6m" [d96a77c2-b58c-4b97-8324-4dae1b5d0ade] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-bvc6m" [d96a77c2-b58c-4b97-8324-4dae1b5d0ade] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.00372027s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.16s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-018090 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-018090 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-018090 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-018090 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 427322: os: process already finished
helpers_test.go:520: unable to terminate pid 427034: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 image load --daemon kicbase/echo-server:functional-018090 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.14s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-018090 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-018090 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [f59e8cd7-7b5c-4ce7-818c-215468f5702f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [f59e8cd7-7b5c-4ce7-818c-215468f5702f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003045586s
I1213 13:13:19.992623  394130 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 image load --daemon kicbase/echo-server:functional-018090 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 image ls
E1213 13:13:13.517347  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.48s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-018090
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 image load --daemon kicbase/echo-server:functional-018090 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 image save kicbase/echo-server:functional-018090 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 image rm kicbase/echo-server:functional-018090 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.57s)
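
Taken together, ImageSaveToFile, ImageRemove, and ImageLoadFromFile exercise a save/remove/load round trip through a tarball on the host. A minimal sketch of that round trip with the same image reference; the /tmp tarball path is an arbitrary stand-in for the workspace path used in the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	minikube := "out/minikube-linux-amd64"
	profile := "functional-018090"
	img := "kicbase/echo-server:functional-018090"
	tar := "/tmp/echo-server-save.tar" // arbitrary host path for this sketch

	steps := [][]string{
		// Export the image from the node to a tarball on the host ...
		{"-p", profile, "image", "save", img, tar},
		// ... remove it from the node ...
		{"-p", profile, "image", "rm", img},
		// ... and load it back from the tarball.
		{"-p", profile, "image", "load", tar},
	}
	for _, args := range steps {
		if out, err := exec.Command(minikube, args...).CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "minikube %v failed: %v\n%s", args, err, out)
			os.Exit(1)
		}
	}
	fmt.Println("save/load round trip completed for", img)
}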

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-018090
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 image save --daemon kicbase/echo-server:functional-018090 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-018090
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-018090 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.17.237 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-018090 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 service list -o json
functional_test.go:1504: Took "515.523428ms" to run "out/minikube-linux-amd64 -p functional-018090 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30635
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30635
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "348.905405ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "70.378114ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (10.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-018090 /tmp/TestFunctionalparallelMountCmdany-port950052724/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765631601919744810" to /tmp/TestFunctionalparallelMountCmdany-port950052724/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765631601919744810" to /tmp/TestFunctionalparallelMountCmdany-port950052724/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765631601919744810" to /tmp/TestFunctionalparallelMountCmdany-port950052724/001/test-1765631601919744810
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-018090 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (291.892333ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 13:13:22.211991  394130 retry.go:31] will retry after 663.144916ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 13:13 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 13:13 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 13:13 test-1765631601919744810
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh cat /mount-9p/test-1765631601919744810
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-018090 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [abc63605-2042-4cb5-94f3-e5820d26ff59] Pending
I1213 13:13:23.975996  394130 detect.go:223] nested VM detected
helpers_test.go:353: "busybox-mount" [abc63605-2042-4cb5-94f3-e5820d26ff59] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [abc63605-2042-4cb5-94f3-e5820d26ff59] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [abc63605-2042-4cb5-94f3-e5820d26ff59] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.003316967s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-018090 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-018090 /tmp/TestFunctionalparallelMountCmdany-port950052724/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.20s)
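The 9p mount flow above can be reproduced by hand outside the test harness; a minimal sketch against the same profile (the host directory /tmp/mount-demo and its contents are made up for illustration):

    mkdir -p /tmp/mount-demo && echo hello > /tmp/mount-demo/created-by-hand
    out/minikube-linux-amd64 mount -p functional-018090 /tmp/mount-demo:/mount-9p &
    out/minikube-linux-amd64 -p functional-018090 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-018090 ssh -- ls -la /mount-9p
    # tear down: stop the background mount, then force-unmount inside the guest
    kill %1
    out/minikube-linux-amd64 -p functional-018090 ssh "sudo umount -f /mount-9p"

The initial findmnt retry seen in the log is expected: the mount process needs a moment to come up before the 9p filesystem is visible in the guest.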

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "363.316103ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "60.965933ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
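Both invocations return the same profile data; the `--light` variant skips probing each cluster's status, which is why it comes back in roughly 60ms versus ~360ms above. The two calls, for reference:

    out/minikube-linux-amd64 profile list -o json           # full listing, probes cluster status
    out/minikube-linux-amd64 profile list -o json --light   # config only, no status probe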

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)
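`update-context` rewrites the kubeconfig entry for the profile so it matches the cluster's current API server address; a manual run plus a quick check (the get-contexts call is an added illustration, not part of the test):

    out/minikube-linux-amd64 -p functional-018090 update-context
    kubectl config get-contexts functional-018090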

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-018090 /tmp/TestFunctionalparallelMountCmdspecific-port1425545898/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-018090 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (350.92138ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 13:13:32.470142  394130 retry.go:31] will retry after 654.091674ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-018090 /tmp/TestFunctionalparallelMountCmdspecific-port1425545898/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-018090 ssh "sudo umount -f /mount-9p": exit status 1 (329.736523ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-018090 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-018090 /tmp/TestFunctionalparallelMountCmdspecific-port1425545898/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.21s)
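The only difference from the any-port variant is pinning the 9p server to a fixed host port via `--port`; a sketch with the same port used above (the host directory is made up for illustration):

    out/minikube-linux-amd64 mount -p functional-018090 /tmp/mount-demo:/mount-9p \
      --port 46464 &
    out/minikube-linux-amd64 -p functional-018090 ssh "findmnt -T /mount-9p | grep 9p"

The final umount failing with "not mounted" in the log appears benign here: the mount had already been stopped before that cleanup step ran.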

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-018090 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3164542437/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-018090 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3164542437/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-018090 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3164542437/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-018090 ssh "findmnt -T" /mount1: exit status 1 (396.789359ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 13:13:34.729019  394130 retry.go:31] will retry after 262.298073ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-018090 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-018090 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-018090 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3164542437/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-018090 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3164542437/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-018090 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3164542437/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)
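The cleanup path exercised above leans on `mount --kill`, which terminates every mount process belonging to the profile in one go; a sketch (host directory made up for illustration):

    out/minikube-linux-amd64 mount -p functional-018090 /tmp/mount-demo:/mount1 &
    out/minikube-linux-amd64 mount -p functional-018090 /tmp/mount-demo:/mount2 &
    out/minikube-linux-amd64 mount -p functional-018090 /tmp/mount-demo:/mount3 &
    out/minikube-linux-amd64 mount -p functional-018090 --kill=true

After the kill, the "unable to find parent, assuming dead" lines are just the test confirming the mount processes are already gone.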

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-018090
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-018090
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-018090
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22122-390571/.minikube/files/etc/test/nested/copy/394130/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (35.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728225 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-728225 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (35.235944264s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (35.24s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1213 13:14:30.645330  394130 config.go:182] Loaded profile config "functional-728225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728225 --alsologtostderr -v=8
E1213 13:14:35.439321  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-728225 --alsologtostderr -v=8: (6.019219766s)
functional_test.go:678: soft start took 6.01955733s for "functional-728225" cluster.
I1213 13:14:36.664922  394130 config.go:182] Loaded profile config "functional-728225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-728225 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.54s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-728225 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach3309824329/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 cache add minikube-local-cache-test:functional-728225
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 cache delete minikube-local-cache-test:functional-728225
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-728225
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.21s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728225 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (280.756551ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.52s)
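The reload sequence can be reproduced directly: remove the image inside the node, confirm it is gone, then push the cached copy back in. The same commands as the test, against the same profile:

    out/minikube-linux-amd64 -p functional-728225 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-728225 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
    out/minikube-linux-amd64 -p functional-728225 cache reload
    out/minikube-linux-amd64 -p functional-728225 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again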

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 kubectl -- --context functional-728225 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-728225 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (47.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728225 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-728225 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.136009127s)
functional_test.go:776: restart took 47.136165989s for "functional-728225" cluster.
I1213 13:15:29.948187  394130 config.go:182] Loaded profile config "functional-728225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (47.14s)
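`--extra-config` takes component.key=value pairs and threads them into that component's flags on restart; the invocation above, plus one way to confirm the apiserver picked the plugin up (the grep is an added illustration, assuming the usual kubeadm component=kube-apiserver pod label):

    out/minikube-linux-amd64 start -p functional-728225 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-728225 -n kube-system get pod -l component=kube-apiserver \
      -o yaml | grep enable-admission-plugins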

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-728225 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)
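The health check selects the static control-plane pods by their tier=control-plane label and reads phase and readiness out of the JSON; a condensed manual equivalent (the custom-columns layout is an added illustration):

    kubectl --context functional-728225 get po -l tier=control-plane -n kube-system \
      -o custom-columns=NAME:.metadata.name,PHASE:.status.phase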

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-728225 logs: (1.183636535s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.18s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs4124872282/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-728225 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs4124872282/001/logs.txt: (1.188522066s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.19s)
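`logs --file` writes the same output that plain `minikube logs` prints, only to a file, which is the form the error boxes elsewhere in this report ask users to attach to GitHub issues; a sketch (the output path is arbitrary):

    out/minikube-linux-amd64 -p functional-728225 logs --file /tmp/functional-728225-logs.txt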

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (3.84s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-728225 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-728225
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-728225: exit status 115 (343.201068ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31854 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-728225 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (3.84s)
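The test deliberately applies a Service that has no running backing pod, so `minikube service` refuses it with SVC_UNREACHABLE (exit status 115) rather than printing a dead URL. The same three steps, exactly as run above:

    kubectl --context functional-728225 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-728225    # exit 115: no running pod for the service
    kubectl --context functional-728225 delete -f testdata/invalidsvc.yaml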

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728225 config get cpus: exit status 14 (75.833559ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728225 config get cpus: exit status 14 (65.75268ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.44s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (6.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-728225 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-728225 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 451176: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (6.10s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728225 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-728225 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (159.409686ms)

                                                
                                                
-- stdout --
	* [functional-728225] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:16:01.752445  450765 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:16:01.752536  450765 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:16:01.752541  450765 out.go:374] Setting ErrFile to fd 2...
	I1213 13:16:01.752546  450765 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:16:01.752792  450765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:16:01.753235  450765 out.go:368] Setting JSON to false
	I1213 13:16:01.754178  450765 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7110,"bootTime":1765624652,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:16:01.754234  450765 start.go:143] virtualization: kvm guest
	I1213 13:16:01.755816  450765 out.go:179] * [functional-728225] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:16:01.757107  450765 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:16:01.757156  450765 notify.go:221] Checking for updates...
	I1213 13:16:01.759338  450765 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:16:01.760366  450765 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:16:01.761489  450765 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:16:01.762586  450765 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:16:01.763572  450765 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:16:01.764949  450765 config.go:182] Loaded profile config "functional-728225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:16:01.765452  450765 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:16:01.788345  450765 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:16:01.788437  450765 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:16:01.846041  450765 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-13 13:16:01.835826467 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:16:01.846197  450765 docker.go:319] overlay module found
	I1213 13:16:01.847874  450765 out.go:179] * Using the docker driver based on existing profile
	I1213 13:16:01.849012  450765 start.go:309] selected driver: docker
	I1213 13:16:01.849031  450765 start.go:927] validating driver "docker" against &{Name:functional-728225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-728225 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:16:01.849132  450765 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:16:01.850850  450765 out.go:203] 
	W1213 13:16:01.851992  450765 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 13:16:01.853101  450765 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728225 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.38s)
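The dry run validates the requested memory against minikube's 1800MB floor before doing anything else, which is why the 250MB request exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) without touching the existing cluster. The two invocations from this test, side by side:

    # rejected up front: 250MB is below the 1800MB usable minimum
    out/minikube-linux-amd64 start -p functional-728225 --dry-run --memory 250MB \
      --alsologtostderr --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
    # accepted: no memory override, so the profile's existing allocation is reused
    out/minikube-linux-amd64 start -p functional-728225 --dry-run --alsologtostderr -v=1 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0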

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728225 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-728225 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (172.915263ms)

                                                
                                                
-- stdout --
	* [functional-728225] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:15:59.914259  449854 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:15:59.914487  449854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:15:59.914495  449854 out.go:374] Setting ErrFile to fd 2...
	I1213 13:15:59.914500  449854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:15:59.914796  449854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:15:59.915217  449854 out.go:368] Setting JSON to false
	I1213 13:15:59.916187  449854 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7108,"bootTime":1765624652,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:15:59.916242  449854 start.go:143] virtualization: kvm guest
	I1213 13:15:59.917782  449854 out.go:179] * [functional-728225] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1213 13:15:59.919258  449854 notify.go:221] Checking for updates...
	I1213 13:15:59.919361  449854 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:15:59.920733  449854 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:15:59.921804  449854 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:15:59.922857  449854 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:15:59.924029  449854 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:15:59.925020  449854 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:15:59.926448  449854 config.go:182] Loaded profile config "functional-728225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:15:59.927087  449854 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:15:59.950333  449854 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:15:59.950407  449854 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:16:00.009505  449854 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-13 13:15:59.998183928 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:16:00.009662  449854 docker.go:319] overlay module found
	I1213 13:16:00.013901  449854 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1213 13:16:00.015210  449854 start.go:309] selected driver: docker
	I1213 13:16:00.015225  449854 start.go:927] validating driver "docker" against &{Name:functional-728225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-728225 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 13:16:00.015317  449854 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:16:00.017008  449854 out.go:203] 
	W1213 13:16:00.018107  449854 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 13:16:00.019170  449854 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.17s)
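The failure path exercised above is minikube's pre-flight memory validation, printed in French because the test drives the CLI under a French locale. A minimal reproduction sketch, assuming the binary and profile names from this run and assuming the locale is selected through the standard LC_ALL environment variable (the exact flag set used by the harness may differ):

    # Ask for less memory than minikube's usable minimum (1800 MB); validation
    # should fail with RSRC_INSUFFICIENT_REQ_MEMORY before anything is created.
    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-728225 --memory=250mb --alsologtostderr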

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.02s)
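The three invocations above differ only in output shaping. A sketch of the equivalent manual checks against the same profile (profile name and template fields taken from this run):

    # default human-readable status
    out/minikube-linux-amd64 -p functional-728225 status
    # custom Go template over the status fields
    out/minikube-linux-amd64 -p functional-728225 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    # machine-readable JSON
    out/minikube-linux-amd64 -p functional-728225 status -o json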

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (8.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-728225 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-728225 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-fmc6l" [8c249112-705d-40a5-afbf-f893151fc2c6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-fmc6l" [8c249112-705d-40a5-afbf-f893151fc2c6] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004167491s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31970
functional_test.go:1680: http://192.168.49.2:31970: success! body:
Request served by hello-node-connect-9f67c86d4-fmc6l

HTTP/1.1 GET /

Host: 192.168.49.2:31970
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (8.67s)
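The flow above (deploy, expose as NodePort, resolve the URL through minikube, hit the endpoint) can be replayed by hand. A sketch assuming the same profile and the kicbase/echo-server image used by the test; the rollout wait stands in for the harness's pod polling:

    kubectl --context functional-728225 create deployment hello-node-connect --image=kicbase/echo-server
    kubectl --context functional-728225 expose deployment hello-node-connect --type=NodePort --port=8080
    kubectl --context functional-728225 rollout status deployment/hello-node-connect
    URL=$(out/minikube-linux-amd64 -p functional-728225 service hello-node-connect --url)
    curl -s "$URL"    # echo-server replies with the request it received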

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (20.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [f7de1310-7aed-4adf-a3b5-e416e031dd46] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007256521s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-728225 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-728225 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-728225 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-728225 apply -f testdata/storage-provisioner/pod.yaml
I1213 13:15:50.691716  394130 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [4dac65e6-ae88-40eb-a9e7-f84c47b3d279] Pending
helpers_test.go:353: "sp-pod" [4dac65e6-ae88-40eb-a9e7-f84c47b3d279] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [4dac65e6-ae88-40eb-a9e7-f84c47b3d279] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003605961s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-728225 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-728225 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-728225 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [ce797e50-74a8-4c2f-8743-eed314a8be79] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [ce797e50-74a8-4c2f-8743-eed314a8be79] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004056454s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-728225 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (20.21s)
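The sequence above checks that data written through a PVC survives pod deletion. A condensed sketch of the same check, assuming the testdata manifests shipped with the minikube repo (a PVC named myclaim and a pod named sp-pod mounting it at /tmp/mount, as seen in the log):

    kubectl --context functional-728225 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-728225 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-728225 wait --for=condition=Ready pod/sp-pod --timeout=6m
    kubectl --context functional-728225 exec sp-pod -- touch /tmp/mount/foo
    # recreate the pod; the file must still be there because it lives on the claim
    kubectl --context functional-728225 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-728225 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-728225 wait --for=condition=Ready pod/sp-pod --timeout=6m
    kubectl --context functional-728225 exec sp-pod -- ls /tmp/mount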

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.69s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.7s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh -n functional-728225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 cp functional-728225:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp600942597/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh -n functional-728225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh -n functional-728225 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.70s)
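minikube cp copies files between the host and the guest node; the three cases covered above (host to guest, guest back to host, host to a guest path that does not yet exist) can be reproduced directly, using the paths from this run:

    out/minikube-linux-amd64 -p functional-728225 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-728225 cp functional-728225:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-amd64 -p functional-728225 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
    out/minikube-linux-amd64 -p functional-728225 ssh -n functional-728225 "sudo cat /home/docker/cp-test.txt"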

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (44.73s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-728225 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-klh5r" [0f8149aa-0171-4018-8a59-95b827fdf5cf] Pending
helpers_test.go:353: "mysql-7d7b65bc95-klh5r" [0f8149aa-0171-4018-8a59-95b827fdf5cf] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-klh5r" [0f8149aa-0171-4018-8a59-95b827fdf5cf] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 41.003207155s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-728225 exec mysql-7d7b65bc95-klh5r -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-728225 exec mysql-7d7b65bc95-klh5r -- mysql -ppassword -e "show databases;": exit status 1 (100.132411ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 13:16:17.956029  394130 retry.go:31] will retry after 651.9861ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-728225 exec mysql-7d7b65bc95-klh5r -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-728225 exec mysql-7d7b65bc95-klh5r -- mysql -ppassword -e "show databases;": exit status 1 (89.697776ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 13:16:18.698852  394130 retry.go:31] will retry after 1.298701511s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-728225 exec mysql-7d7b65bc95-klh5r -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-728225 exec mysql-7d7b65bc95-klh5r -- mysql -ppassword -e "show databases;": exit status 1 (87.948486ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 13:16:20.086510  394130 retry.go:31] will retry after 1.207854019s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-728225 exec mysql-7d7b65bc95-klh5r -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (44.73s)
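The retries above are expected: the pod reports Running before mysqld inside it is ready to accept connections (hence the ERROR 1045/2002 responses), so the harness polls until the query succeeds. A sketch of the same poll, assuming the app=mysql label and the password from testdata/mysql.yaml shown in the log:

    POD=$(kubectl --context functional-728225 get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}')
    until kubectl --context functional-728225 exec "$POD" -- mysql -ppassword -e "show databases;"; do
        sleep 2    # keep retrying while mysqld is still initializing
    done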

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/394130/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh "sudo cat /etc/test/nested/copy/394130/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/394130.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh "sudo cat /etc/ssl/certs/394130.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/394130.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh "sudo cat /usr/share/ca-certificates/394130.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3941302.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh "sudo cat /etc/ssl/certs/3941302.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3941302.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh "sudo cat /usr/share/ca-certificates/3941302.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.67s)
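CertSync checks that certificates staged in the tester's MINIKUBE_HOME are propagated into the guest's trust store under several names: the numbered .pem files (named after the test process ID here), a copy under /usr/share/ca-certificates, and what appears to be an OpenSSL hash-style .0 alias. A manual spot check using the exact paths from this run:

    out/minikube-linux-amd64 -p functional-728225 ssh "sudo cat /etc/ssl/certs/394130.pem"
    out/minikube-linux-amd64 -p functional-728225 ssh "sudo cat /usr/share/ca-certificates/394130.pem"
    # the hash-named entry should carry the same certificate contents
    out/minikube-linux-amd64 -p functional-728225 ssh "sudo cat /etc/ssl/certs/51391683.0"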

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-728225 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.64s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728225 ssh "sudo systemctl is-active docker": exit status 1 (318.25276ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728225 ssh "sudo systemctl is-active containerd": exit status 1 (320.268274ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.64s)
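The non-zero exits above are the success case: systemctl is-active exits non-zero (typically status 3) for an inactive unit, which is exactly what the test wants for docker and containerd when crio is the configured runtime. A direct check, assuming the crio service name used on minikube nodes:

    # with the crio runtime, both of these should print "inactive" and exit non-zero
    out/minikube-linux-amd64 -p functional-728225 ssh "sudo systemctl is-active docker"
    out/minikube-linux-amd64 -p functional-728225 ssh "sudo systemctl is-active containerd"
    # the active runtime
    out/minikube-linux-amd64 -p functional-728225 ssh "sudo systemctl is-active crio"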

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.44s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.09s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.6s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.60s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728225 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-728225
localhost/kicbase/echo-server:functional-728225
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728225 image ls --format short --alsologtostderr:
I1213 13:16:06.303728  452253 out.go:360] Setting OutFile to fd 1 ...
I1213 13:16:06.304071  452253 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:16:06.304083  452253 out.go:374] Setting ErrFile to fd 2...
I1213 13:16:06.304089  452253 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:16:06.304368  452253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
I1213 13:16:06.305114  452253 config.go:182] Loaded profile config "functional-728225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 13:16:06.305252  452253 config.go:182] Loaded profile config "functional-728225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 13:16:06.305821  452253 cli_runner.go:164] Run: docker container inspect functional-728225 --format={{.State.Status}}
I1213 13:16:06.328536  452253 ssh_runner.go:195] Run: systemctl --version
I1213 13:16:06.328604  452253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-728225
I1213 13:16:06.354238  452253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/functional-728225/id_rsa Username:docker}
I1213 13:16:06.462476  452253 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.37s)
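As the stderr above shows, minikube image ls is answered from the container runtime (sudo crictl images under crio) and only the rendering differs between the ImageList* tests. The same listing can be pulled in any of the supported formats:

    out/minikube-linux-amd64 -p functional-728225 image ls --format short
    out/minikube-linux-amd64 -p functional-728225 image ls --format table
    out/minikube-linux-amd64 -p functional-728225 image ls --format json
    out/minikube-linux-amd64 -p functional-728225 image ls --format yaml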

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728225 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-728225  │ 9056ab77afb8e │ 4.95MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-728225  │ d756edd9cc2ec │ 3.33kB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728225 image ls --format table --alsologtostderr:
I1213 13:16:08.491280  453595 out.go:360] Setting OutFile to fd 1 ...
I1213 13:16:08.491535  453595 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:16:08.491546  453595 out.go:374] Setting ErrFile to fd 2...
I1213 13:16:08.491553  453595 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:16:08.491817  453595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
I1213 13:16:08.492433  453595 config.go:182] Loaded profile config "functional-728225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 13:16:08.492562  453595 config.go:182] Loaded profile config "functional-728225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 13:16:08.493123  453595 cli_runner.go:164] Run: docker container inspect functional-728225 --format={{.State.Status}}
I1213 13:16:08.511620  453595 ssh_runner.go:195] Run: systemctl --version
I1213 13:16:08.511685  453595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-728225
I1213 13:16:08.531273  453595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/functional-728225/id_rsa Username:docker}
I1213 13:16:08.627202  453595 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728225 image ls --format json --alsologtostderr:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"aa5e3ebc0dfed0566805186b9e471
10d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"d756edd9cc2ecc423a95b3a44d18c01b440b5a645f820d425e30f72aa700276c","repoDigests":["localhost/minikube-local-cache-test@sha256:38e6f72e4254eeee510a9cc36e5210b9883299b131aa31e37088ad1488ed38ec"],"repoTags":["localhost/minikube-local-cache-test:functional-728225"],"size":"3330"},{"id":"a
3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71977881"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDi
gests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"aa9d02839d8def718798bd410c88
aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90819569"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899a
e1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-728225"],"size":"4945246"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5
af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76872535"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e1
1276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728225 image ls --format json --alsologtostderr:
I1213 13:16:08.238771  453337 out.go:360] Setting OutFile to fd 1 ...
I1213 13:16:08.238922  453337 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:16:08.238936  453337 out.go:374] Setting ErrFile to fd 2...
I1213 13:16:08.238942  453337 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:16:08.239209  453337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
I1213 13:16:08.240195  453337 config.go:182] Loaded profile config "functional-728225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 13:16:08.240425  453337 config.go:182] Loaded profile config "functional-728225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 13:16:08.241521  453337 cli_runner.go:164] Run: docker container inspect functional-728225 --format={{.State.Status}}
I1213 13:16:08.263220  453337 ssh_runner.go:195] Run: systemctl --version
I1213 13:16:08.263298  453337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-728225
I1213 13:16:08.284046  453337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/functional-728225/id_rsa Username:docker}
I1213 13:16:08.386953  453337 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728225 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: d756edd9cc2ecc423a95b3a44d18c01b440b5a645f820d425e30f72aa700276c
repoDigests:
- localhost/minikube-local-cache-test@sha256:38e6f72e4254eeee510a9cc36e5210b9883299b131aa31e37088ad1488ed38ec
repoTags:
- localhost/minikube-local-cache-test:functional-728225
size: "3330"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-728225
size: "4945246"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728225 image ls --format yaml --alsologtostderr:
I1213 13:16:06.655546  452422 out.go:360] Setting OutFile to fd 1 ...
I1213 13:16:06.655802  452422 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:16:06.655814  452422 out.go:374] Setting ErrFile to fd 2...
I1213 13:16:06.655820  452422 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:16:06.656058  452422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
I1213 13:16:06.656578  452422 config.go:182] Loaded profile config "functional-728225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 13:16:06.656676  452422 config.go:182] Loaded profile config "functional-728225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 13:16:06.657092  452422 cli_runner.go:164] Run: docker container inspect functional-728225 --format={{.State.Status}}
I1213 13:16:06.684002  452422 ssh_runner.go:195] Run: systemctl --version
I1213 13:16:06.684066  452422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-728225
I1213 13:16:06.710678  452422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/functional-728225/id_rsa Username:docker}
I1213 13:16:06.822863  452422 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (2.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728225 ssh pgrep buildkitd: exit status 1 (300.624028ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 image build -t localhost/my-image:functional-728225 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-728225 image build -t localhost/my-image:functional-728225 testdata/build --alsologtostderr: (1.747869232s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728225 image build -t localhost/my-image:functional-728225 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 11062dcb444
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-728225
--> fc830920e21
Successfully tagged localhost/my-image:functional-728225
fc830920e21bcdd06565a8be74aa046a22ce90c68ed52c42d052c387c3732d66
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728225 image build -t localhost/my-image:functional-728225 testdata/build --alsologtostderr:
I1213 13:16:07.237498  452990 out.go:360] Setting OutFile to fd 1 ...
I1213 13:16:07.237753  452990 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:16:07.237762  452990 out.go:374] Setting ErrFile to fd 2...
I1213 13:16:07.237767  452990 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:16:07.237977  452990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
I1213 13:16:07.238515  452990 config.go:182] Loaded profile config "functional-728225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 13:16:07.239227  452990 config.go:182] Loaded profile config "functional-728225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 13:16:07.239702  452990 cli_runner.go:164] Run: docker container inspect functional-728225 --format={{.State.Status}}
I1213 13:16:07.258211  452990 ssh_runner.go:195] Run: systemctl --version
I1213 13:16:07.258258  452990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-728225
I1213 13:16:07.275208  452990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/functional-728225/id_rsa Username:docker}
I1213 13:16:07.370962  452990 build_images.go:162] Building image from path: /tmp/build.1044578032.tar
I1213 13:16:07.371072  452990 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 13:16:07.380452  452990 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1044578032.tar
I1213 13:16:07.384173  452990 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1044578032.tar: stat -c "%s %y" /var/lib/minikube/build/build.1044578032.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1044578032.tar': No such file or directory
I1213 13:16:07.384203  452990 ssh_runner.go:362] scp /tmp/build.1044578032.tar --> /var/lib/minikube/build/build.1044578032.tar (3072 bytes)
I1213 13:16:07.402966  452990 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1044578032
I1213 13:16:07.410940  452990 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1044578032 -xf /var/lib/minikube/build/build.1044578032.tar
I1213 13:16:07.419607  452990 crio.go:315] Building image: /var/lib/minikube/build/build.1044578032
I1213 13:16:07.419661  452990 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-728225 /var/lib/minikube/build/build.1044578032 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1213 13:16:08.905331  452990 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-728225 /var/lib/minikube/build/build.1044578032 --cgroup-manager=cgroupfs: (1.485641413s)
I1213 13:16:08.905403  452990 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1044578032
I1213 13:16:08.914060  452990 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1044578032.tar
I1213 13:16:08.922634  452990 build_images.go:218] Built localhost/my-image:functional-728225 from /tmp/build.1044578032.tar
I1213 13:16:08.922663  452990 build_images.go:134] succeeded building to: functional-728225
I1213 13:16:08.922670  452990 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (2.27s)
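The three build steps printed in the stdout above imply a build context roughly like the sketch below. This is an assumption for illustration only: the real contents of testdata/build and content.txt are not included in this report, and /tmp/build-ctx is a placeholder path.
  mkdir -p /tmp/build-ctx && cd /tmp/build-ctx
  echo "placeholder" > content.txt        # assumed stand-in; the real content.txt is not shown in this report
  cat > Containerfile <<'EOF'
  FROM gcr.io/k8s-minikube/busybox
  RUN true
  ADD content.txt /
  EOF
  out/minikube-linux-amd64 -p functional-728225 image build -t localhost/my-image:functional-728225 /tmp/build-ctx --alsologtostderr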

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-728225
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.39s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 image load --daemon kicbase/echo-server:functional-728225 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.92s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 image load --daemon kicbase/echo-server:functional-728225 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.92s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-728225 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-728225 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-728225 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-728225 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 446509: os: process already finished
helpers_test.go:526: unable to kill pid 446280: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-728225 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (17.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-728225 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [6fbf595e-0cf4-4863-ad14-b5c62a2caf26] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [6fbf595e-0cf4-4863-ad14-b5c62a2caf26] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 17.003560981s
I1213 13:15:57.239202  394130 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (17.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (1.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 image save kicbase/echo-server:functional-728225 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-728225 image save kicbase/echo-server:functional-728225 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.335408281s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (1.34s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 image rm kicbase/echo-server:functional-728225 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.60s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-728225
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 image save --daemon kicbase/echo-server:functional-728225 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-728225
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (8.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-728225 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-728225 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-g4tmr" [144960ff-cd5e-4c50-ac78-9aa2215b211b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-g4tmr" [144960ff-cd5e-4c50-ac78-9aa2215b211b] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003382607s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (8.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.89s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.89s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.9s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 service list -o json
functional_test.go:1504: Took "900.632922ms" to run "out/minikube-linux-amd64 -p functional-728225 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.90s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31514
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-728225 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.200.83 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-728225 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.55s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "334.391941ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "62.270864ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "359.736607ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "61.204196ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31514
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.57s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (5.78s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728225 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo239637620/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765631758655465448" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo239637620/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765631758655465448" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo239637620/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765631758655465448" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo239637620/001/test-1765631758655465448
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728225 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (304.83587ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 13:15:58.960610  394130 retry.go:31] will retry after 422.87621ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh "findmnt -T /mount-9p | grep 9p"
I1213 13:15:59.420994  394130 detect.go:223] nested VM detected
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 13:15 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 13:15 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 13:15 test-1765631758655465448
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh cat /mount-9p/test-1765631758655465448
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-728225 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [b2ddd40b-8bb6-46cb-9108-a39d884af2fe] Pending
helpers_test.go:353: "busybox-mount" [b2ddd40b-8bb6-46cb-9108-a39d884af2fe] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [b2ddd40b-8bb6-46cb-9108-a39d884af2fe] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [b2ddd40b-8bb6-46cb-9108-a39d884af2fe] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003872182s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-728225 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728225 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo239637620/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (5.78s)
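Condensed, the 9p mount flow exercised above can be reproduced by hand with the same commands that appear in the log; the host directory below is illustrative only (the test generates a fresh temporary directory per run).
  out/minikube-linux-amd64 mount -p functional-728225 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
  out/minikube-linux-amd64 -p functional-728225 ssh "findmnt -T /mount-9p | grep 9p"    # the test retries this until the mount appears
  out/minikube-linux-amd64 -p functional-728225 ssh -- ls -la /mount-9p
  out/minikube-linux-amd64 -p functional-728225 ssh "sudo umount -f /mount-9p"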

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728225 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4182374982/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728225 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (324.416016ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 13:16:04.759484  394130 retry.go:31] will retry after 679.17184ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728225 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4182374982/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728225 ssh "sudo umount -f /mount-9p": exit status 1 (331.293908ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-728225 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728225 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4182374982/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.94s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728225 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3197110228/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728225 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3197110228/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728225 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3197110228/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728225 ssh "findmnt -T" /mount1: exit status 1 (388.852805ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 13:16:07.057199  394130 retry.go:31] will retry after 577.242937ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh "findmnt -T" /mount2
2025/12/13 13:16:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-728225 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-728225 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728225 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3197110228/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728225 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3197110228/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728225 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3197110228/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.94s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-728225
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-728225
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-728225
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (140.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1213 13:16:51.580651  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:17:19.282162  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:18:10.802047  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-018090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:18:10.808468  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-018090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:18:10.819837  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-018090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:18:10.841240  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-018090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:18:10.882597  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-018090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:18:10.964027  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-018090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:18:11.125527  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-018090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:18:11.447202  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-018090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:18:12.088972  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-018090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:18:13.371004  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-018090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:18:15.932462  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-018090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:18:21.054592  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-018090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:18:31.296895  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-018090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-152321 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m19.29926024s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (140.03s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-152321 kubectl -- rollout status deployment/busybox: (2.040707986s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 kubectl -- exec busybox-7b57f96db7-948m9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 kubectl -- exec busybox-7b57f96db7-f5xvx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 kubectl -- exec busybox-7b57f96db7-xkklv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 kubectl -- exec busybox-7b57f96db7-948m9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 kubectl -- exec busybox-7b57f96db7-f5xvx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 kubectl -- exec busybox-7b57f96db7-xkklv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 kubectl -- exec busybox-7b57f96db7-948m9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 kubectl -- exec busybox-7b57f96db7-f5xvx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 kubectl -- exec busybox-7b57f96db7-xkklv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.00s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 kubectl -- exec busybox-7b57f96db7-948m9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 kubectl -- exec busybox-7b57f96db7-948m9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 kubectl -- exec busybox-7b57f96db7-f5xvx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 kubectl -- exec busybox-7b57f96db7-f5xvx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 kubectl -- exec busybox-7b57f96db7-xkklv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 kubectl -- exec busybox-7b57f96db7-xkklv -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.03s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 node add --alsologtostderr -v 5
E1213 13:18:51.778807  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-018090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-152321 node add --alsologtostderr -v 5: (23.16169198s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.02s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-152321 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (16.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 cp testdata/cp-test.txt ha-152321:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 cp ha-152321:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1272293237/001/cp-test_ha-152321.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 cp ha-152321:/home/docker/cp-test.txt ha-152321-m02:/home/docker/cp-test_ha-152321_ha-152321-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m02 "sudo cat /home/docker/cp-test_ha-152321_ha-152321-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 cp ha-152321:/home/docker/cp-test.txt ha-152321-m03:/home/docker/cp-test_ha-152321_ha-152321-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m03 "sudo cat /home/docker/cp-test_ha-152321_ha-152321-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 cp ha-152321:/home/docker/cp-test.txt ha-152321-m04:/home/docker/cp-test_ha-152321_ha-152321-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m04 "sudo cat /home/docker/cp-test_ha-152321_ha-152321-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 cp testdata/cp-test.txt ha-152321-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 cp ha-152321-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1272293237/001/cp-test_ha-152321-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 cp ha-152321-m02:/home/docker/cp-test.txt ha-152321:/home/docker/cp-test_ha-152321-m02_ha-152321.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321 "sudo cat /home/docker/cp-test_ha-152321-m02_ha-152321.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 cp ha-152321-m02:/home/docker/cp-test.txt ha-152321-m03:/home/docker/cp-test_ha-152321-m02_ha-152321-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m03 "sudo cat /home/docker/cp-test_ha-152321-m02_ha-152321-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 cp ha-152321-m02:/home/docker/cp-test.txt ha-152321-m04:/home/docker/cp-test_ha-152321-m02_ha-152321-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m04 "sudo cat /home/docker/cp-test_ha-152321-m02_ha-152321-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 cp testdata/cp-test.txt ha-152321-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 cp ha-152321-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1272293237/001/cp-test_ha-152321-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 cp ha-152321-m03:/home/docker/cp-test.txt ha-152321:/home/docker/cp-test_ha-152321-m03_ha-152321.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321 "sudo cat /home/docker/cp-test_ha-152321-m03_ha-152321.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 cp ha-152321-m03:/home/docker/cp-test.txt ha-152321-m02:/home/docker/cp-test_ha-152321-m03_ha-152321-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m02 "sudo cat /home/docker/cp-test_ha-152321-m03_ha-152321-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 cp ha-152321-m03:/home/docker/cp-test.txt ha-152321-m04:/home/docker/cp-test_ha-152321-m03_ha-152321-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m04 "sudo cat /home/docker/cp-test_ha-152321-m03_ha-152321-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 cp testdata/cp-test.txt ha-152321-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 cp ha-152321-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1272293237/001/cp-test_ha-152321-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 cp ha-152321-m04:/home/docker/cp-test.txt ha-152321:/home/docker/cp-test_ha-152321-m04_ha-152321.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321 "sudo cat /home/docker/cp-test_ha-152321-m04_ha-152321.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 cp ha-152321-m04:/home/docker/cp-test.txt ha-152321-m02:/home/docker/cp-test_ha-152321-m04_ha-152321-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m02 "sudo cat /home/docker/cp-test_ha-152321-m04_ha-152321-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 cp ha-152321-m04:/home/docker/cp-test.txt ha-152321-m03:/home/docker/cp-test_ha-152321-m04_ha-152321-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 ssh -n ha-152321-m03 "sudo cat /home/docker/cp-test_ha-152321-m04_ha-152321-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.96s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (14.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 node stop m02 --alsologtostderr -v 5
E1213 13:19:32.740986  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-018090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-152321 node stop m02 --alsologtostderr -v 5: (13.467054004s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-152321 status --alsologtostderr -v 5: exit status 7 (689.10206ms)

                                                
                                                
-- stdout --
	ha-152321
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-152321-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-152321-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-152321-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:19:45.028869  474329 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:19:45.029112  474329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:19:45.029121  474329 out.go:374] Setting ErrFile to fd 2...
	I1213 13:19:45.029125  474329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:19:45.029309  474329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:19:45.029509  474329 out.go:368] Setting JSON to false
	I1213 13:19:45.029538  474329 mustload.go:66] Loading cluster: ha-152321
	I1213 13:19:45.029699  474329 notify.go:221] Checking for updates...
	I1213 13:19:45.029879  474329 config.go:182] Loaded profile config "ha-152321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:19:45.029895  474329 status.go:174] checking status of ha-152321 ...
	I1213 13:19:45.030368  474329 cli_runner.go:164] Run: docker container inspect ha-152321 --format={{.State.Status}}
	I1213 13:19:45.051562  474329 status.go:371] ha-152321 host status = "Running" (err=<nil>)
	I1213 13:19:45.051608  474329 host.go:66] Checking if "ha-152321" exists ...
	I1213 13:19:45.051960  474329 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-152321
	I1213 13:19:45.071283  474329 host.go:66] Checking if "ha-152321" exists ...
	I1213 13:19:45.071554  474329 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:19:45.071627  474329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-152321
	I1213 13:19:45.090250  474329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/ha-152321/id_rsa Username:docker}
	I1213 13:19:45.183048  474329 ssh_runner.go:195] Run: systemctl --version
	I1213 13:19:45.190284  474329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:19:45.202751  474329 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:19:45.262200  474329 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-13 13:19:45.252950465 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:19:45.262753  474329 kubeconfig.go:125] found "ha-152321" server: "https://192.168.49.254:8443"
	I1213 13:19:45.262804  474329 api_server.go:166] Checking apiserver status ...
	I1213 13:19:45.262853  474329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:19:45.274255  474329 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1251/cgroup
	W1213 13:19:45.282737  474329 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1251/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 13:19:45.282795  474329 ssh_runner.go:195] Run: ls
	I1213 13:19:45.286945  474329 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 13:19:45.292156  474329 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 13:19:45.292178  474329 status.go:463] ha-152321 apiserver status = Running (err=<nil>)
	I1213 13:19:45.292188  474329 status.go:176] ha-152321 status: &{Name:ha-152321 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:19:45.292205  474329 status.go:174] checking status of ha-152321-m02 ...
	I1213 13:19:45.292504  474329 cli_runner.go:164] Run: docker container inspect ha-152321-m02 --format={{.State.Status}}
	I1213 13:19:45.310204  474329 status.go:371] ha-152321-m02 host status = "Stopped" (err=<nil>)
	I1213 13:19:45.310222  474329 status.go:384] host is not running, skipping remaining checks
	I1213 13:19:45.310228  474329 status.go:176] ha-152321-m02 status: &{Name:ha-152321-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:19:45.310245  474329 status.go:174] checking status of ha-152321-m03 ...
	I1213 13:19:45.310514  474329 cli_runner.go:164] Run: docker container inspect ha-152321-m03 --format={{.State.Status}}
	I1213 13:19:45.327203  474329 status.go:371] ha-152321-m03 host status = "Running" (err=<nil>)
	I1213 13:19:45.327224  474329 host.go:66] Checking if "ha-152321-m03" exists ...
	I1213 13:19:45.327461  474329 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-152321-m03
	I1213 13:19:45.343290  474329 host.go:66] Checking if "ha-152321-m03" exists ...
	I1213 13:19:45.343540  474329 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:19:45.343614  474329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-152321-m03
	I1213 13:19:45.360507  474329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/ha-152321-m03/id_rsa Username:docker}
	I1213 13:19:45.454256  474329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:19:45.467309  474329 kubeconfig.go:125] found "ha-152321" server: "https://192.168.49.254:8443"
	I1213 13:19:45.467340  474329 api_server.go:166] Checking apiserver status ...
	I1213 13:19:45.467384  474329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:19:45.478057  474329 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	W1213 13:19:45.486171  474329 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 13:19:45.486220  474329 ssh_runner.go:195] Run: ls
	I1213 13:19:45.489955  474329 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 13:19:45.494113  474329 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 13:19:45.494136  474329 status.go:463] ha-152321-m03 apiserver status = Running (err=<nil>)
	I1213 13:19:45.494146  474329 status.go:176] ha-152321-m03 status: &{Name:ha-152321-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:19:45.494166  474329 status.go:174] checking status of ha-152321-m04 ...
	I1213 13:19:45.494401  474329 cli_runner.go:164] Run: docker container inspect ha-152321-m04 --format={{.State.Status}}
	I1213 13:19:45.513089  474329 status.go:371] ha-152321-m04 host status = "Running" (err=<nil>)
	I1213 13:19:45.513108  474329 host.go:66] Checking if "ha-152321-m04" exists ...
	I1213 13:19:45.513403  474329 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-152321-m04
	I1213 13:19:45.531078  474329 host.go:66] Checking if "ha-152321-m04" exists ...
	I1213 13:19:45.531389  474329 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:19:45.531439  474329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-152321-m04
	I1213 13:19:45.549330  474329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/ha-152321-m04/id_rsa Username:docker}
	I1213 13:19:45.643815  474329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:19:45.655704  474329 status.go:176] ha-152321-m04 status: &{Name:ha-152321-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.16s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (14.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-152321 node start m02 --alsologtostderr -v 5: (13.587104367s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (114.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 stop --alsologtostderr -v 5
E1213 13:20:36.824008  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-728225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:20:36.832127  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-728225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:20:36.843637  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-728225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:20:36.865203  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-728225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:20:36.906631  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-728225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:20:36.988118  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-728225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:20:37.149835  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-728225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:20:37.472438  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-728225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:20:38.114021  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-728225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:20:39.396030  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-728225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:20:41.958181  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-728225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:20:47.079718  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-728225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:20:54.664543  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-018090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-152321 stop --alsologtostderr -v 5: (55.25790641s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 start --wait true --alsologtostderr -v 5
E1213 13:20:57.321414  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-728225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:21:17.803265  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-728225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:21:51.580042  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-152321 start --wait true --alsologtostderr -v 5: (59.590483131s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (114.98s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (10.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 node delete m03 --alsologtostderr -v 5
E1213 13:21:58.765957  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-728225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-152321 node delete m03 --alsologtostderr -v 5: (9.693354299s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.49s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (41.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-152321 stop --alsologtostderr -v 5: (41.290479935s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-152321 status --alsologtostderr -v 5: exit status 7 (113.333187ms)

                                                
                                                
-- stdout --
	ha-152321
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-152321-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-152321-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:22:49.274291  488537 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:22:49.274548  488537 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:22:49.274556  488537 out.go:374] Setting ErrFile to fd 2...
	I1213 13:22:49.274560  488537 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:22:49.274745  488537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:22:49.274920  488537 out.go:368] Setting JSON to false
	I1213 13:22:49.274945  488537 mustload.go:66] Loading cluster: ha-152321
	I1213 13:22:49.275087  488537 notify.go:221] Checking for updates...
	I1213 13:22:49.275299  488537 config.go:182] Loaded profile config "ha-152321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:22:49.275314  488537 status.go:174] checking status of ha-152321 ...
	I1213 13:22:49.275752  488537 cli_runner.go:164] Run: docker container inspect ha-152321 --format={{.State.Status}}
	I1213 13:22:49.293948  488537 status.go:371] ha-152321 host status = "Stopped" (err=<nil>)
	I1213 13:22:49.293991  488537 status.go:384] host is not running, skipping remaining checks
	I1213 13:22:49.293999  488537 status.go:176] ha-152321 status: &{Name:ha-152321 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:22:49.294052  488537 status.go:174] checking status of ha-152321-m02 ...
	I1213 13:22:49.294388  488537 cli_runner.go:164] Run: docker container inspect ha-152321-m02 --format={{.State.Status}}
	I1213 13:22:49.311026  488537 status.go:371] ha-152321-m02 host status = "Stopped" (err=<nil>)
	I1213 13:22:49.311043  488537 status.go:384] host is not running, skipping remaining checks
	I1213 13:22:49.311049  488537 status.go:176] ha-152321-m02 status: &{Name:ha-152321-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:22:49.311065  488537 status.go:174] checking status of ha-152321-m04 ...
	I1213 13:22:49.311284  488537 cli_runner.go:164] Run: docker container inspect ha-152321-m04 --format={{.State.Status}}
	I1213 13:22:49.327666  488537 status.go:371] ha-152321-m04 host status = "Stopped" (err=<nil>)
	I1213 13:22:49.327684  488537 status.go:384] host is not running, skipping remaining checks
	I1213 13:22:49.327693  488537 status.go:176] ha-152321-m04 status: &{Name:ha-152321-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (41.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (53.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1213 13:23:10.801871  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-018090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:23:20.688220  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-728225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:23:38.506615  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-018090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-152321 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (52.459472199s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (53.25s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (37.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-152321 node add --control-plane --alsologtostderr -v 5: (36.95614453s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-152321 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (37.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

                                                
                                    
x
+
TestJSONOutput/start/Command (40.49s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-664471 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-664471 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (40.487971342s)
--- PASS: TestJSONOutput/start/Command (40.49s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.99s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-664471 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-664471 --output=json --user=testUser: (7.98652628s)
--- PASS: TestJSONOutput/stop/Command (7.99s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-278145 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-278145 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (74.075746ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4ae0ae47-4bdb-4258-b9ea-4ab3abbbb887","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-278145] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"17fa34c6-7e2e-43ea-bcee-6deefe35bef0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22122"}}
	{"specversion":"1.0","id":"8eefe907-b306-4e78-8425-f50311acb459","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"eb9f7a6d-d5ea-4121-8f2b-48a8c21211db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig"}}
	{"specversion":"1.0","id":"702c9159-8a53-4d6a-b3dc-7ef14565a1a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube"}}
	{"specversion":"1.0","id":"884b7f3a-c7bb-4a82-a0b9-e6dc06b32631","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"277b046f-274d-42de-9e4c-38b39ebbbf83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"45ef55da-ac08-4f8e-86c4-7250e9cf448e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-278145" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-278145
--- PASS: TestErrorJSONOutput (0.22s)

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (25.43s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-386675 --network=
E1213 13:25:36.823984  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-728225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-386675 --network=: (23.315130382s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-386675" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-386675
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-386675: (2.094652711s)
--- PASS: TestKicCustomNetwork/create_custom_network (25.43s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (22.36s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-366831 --network=bridge
E1213 13:26:04.532278  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-728225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-366831 --network=bridge: (20.345904089s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-366831" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-366831
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-366831: (1.988131384s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.36s)

                                                
                                    
x
+
TestKicExistingNetwork (25.98s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1213 13:26:13.914204  394130 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1213 13:26:13.930208  394130 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1213 13:26:13.930276  394130 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1213 13:26:13.930306  394130 cli_runner.go:164] Run: docker network inspect existing-network
W1213 13:26:13.946013  394130 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1213 13:26:13.946038  394130 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1213 13:26:13.946057  394130 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1213 13:26:13.946201  394130 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 13:26:13.963653  394130 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-90c6185d3a1c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:d7:d8:45:ed:62} reservation:<nil>}
I1213 13:26:13.964129  394130 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cafc70}
I1213 13:26:13.964157  394130 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1213 13:26:13.964195  394130 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1213 13:26:14.012233  394130 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-260036 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-260036 --network=existing-network: (23.880837837s)
helpers_test.go:176: Cleaning up "existing-network-260036" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-260036
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-260036: (1.970663865s)
I1213 13:26:39.881242  394130 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.98s)

                                                
                                    
x
+
TestKicCustomSubnet (26.22s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-008337 --subnet=192.168.60.0/24
E1213 13:26:51.579796  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-008337 --subnet=192.168.60.0/24: (24.101308132s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-008337 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-008337" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-008337
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-008337: (2.102935378s)
--- PASS: TestKicCustomSubnet (26.22s)

                                                
                                    
x
+
TestKicStaticIP (23.49s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-975311 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-975311 --static-ip=192.168.200.200: (21.249076955s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-975311 ip
helpers_test.go:176: Cleaning up "static-ip-975311" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-975311
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-975311: (2.091796994s)
--- PASS: TestKicStaticIP (23.49s)

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (44.5s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-856042 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-856042 --driver=docker  --container-runtime=crio: (18.423009267s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-859698 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-859698 --driver=docker  --container-runtime=crio: (20.18033386s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-856042
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-859698
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-859698" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-859698
E1213 13:28:10.802205  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-018090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-859698: (2.327599493s)
helpers_test.go:176: Cleaning up "first-856042" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-856042
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-856042: (2.342786612s)
--- PASS: TestMinikubeProfile (44.50s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (7.65s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-562387 --memory=3072 --mount-string /tmp/TestMountStartserial3952282509/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1213 13:28:14.643694  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-562387 --memory=3072 --mount-string /tmp/TestMountStartserial3952282509/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.652173254s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.65s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-562387 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (7.81s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-581715 --memory=3072 --mount-string /tmp/TestMountStartserial3952282509/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-581715 --memory=3072 --mount-string /tmp/TestMountStartserial3952282509/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.813798851s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.81s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-581715 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-562387 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-562387 --alsologtostderr -v=5: (1.654630264s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-581715 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-581715
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-581715: (1.253263828s)
--- PASS: TestMountStart/serial/Stop (1.25s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.29s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-581715
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-581715: (6.293418089s)
--- PASS: TestMountStart/serial/RestartStopped (7.29s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-581715 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (58.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-901224 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-901224 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (58.125454684s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (58.59s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (3.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901224 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901224 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-901224 -- rollout status deployment/busybox: (1.869666043s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901224 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901224 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901224 -- exec busybox-7b57f96db7-jgzw5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901224 -- exec busybox-7b57f96db7-rj9n6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901224 -- exec busybox-7b57f96db7-jgzw5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901224 -- exec busybox-7b57f96db7-rj9n6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901224 -- exec busybox-7b57f96db7-jgzw5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901224 -- exec busybox-7b57f96db7-rj9n6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.25s)
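For reference, a minimal Go sketch of the pod-IP check this step exercises, assuming kubectl is on PATH and already pointed at the multinode cluster; the jsonpath query is the one shown in the log above, everything else is illustrative and not the test's own code:

	// podips.go: list pod IPs the way the test's jsonpath query does and
	// confirm one IP per busybox replica was assigned. A sketch only.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "get", "pods",
			"-o", "jsonpath={.items[*].status.podIP}").Output()
		if err != nil {
			log.Fatalf("kubectl failed: %v", err)
		}
		ips := strings.Fields(string(out))
		fmt.Println("pod IPs:", ips)
		if len(ips) < 2 {
			log.Fatalf("expected 2 pod IPs for the 2-replica deployment, got %d", len(ips))
		}
	}
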

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901224 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901224 -- exec busybox-7b57f96db7-jgzw5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901224 -- exec busybox-7b57f96db7-jgzw5 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901224 -- exec busybox-7b57f96db7-rj9n6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-901224 -- exec busybox-7b57f96db7-rj9n6 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.72s)

                                                
                                    
TestMultiNode/serial/AddNode (26.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-901224 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-901224 -v=5 --alsologtostderr: (25.386006497s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (26.01s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-901224 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 cp testdata/cp-test.txt multinode-901224:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 ssh -n multinode-901224 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 cp multinode-901224:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile935339519/001/cp-test_multinode-901224.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 ssh -n multinode-901224 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 cp multinode-901224:/home/docker/cp-test.txt multinode-901224-m02:/home/docker/cp-test_multinode-901224_multinode-901224-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 ssh -n multinode-901224 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 ssh -n multinode-901224-m02 "sudo cat /home/docker/cp-test_multinode-901224_multinode-901224-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 cp multinode-901224:/home/docker/cp-test.txt multinode-901224-m03:/home/docker/cp-test_multinode-901224_multinode-901224-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 ssh -n multinode-901224 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 ssh -n multinode-901224-m03 "sudo cat /home/docker/cp-test_multinode-901224_multinode-901224-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 cp testdata/cp-test.txt multinode-901224-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 ssh -n multinode-901224-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 cp multinode-901224-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile935339519/001/cp-test_multinode-901224-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 ssh -n multinode-901224-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 cp multinode-901224-m02:/home/docker/cp-test.txt multinode-901224:/home/docker/cp-test_multinode-901224-m02_multinode-901224.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 ssh -n multinode-901224-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 ssh -n multinode-901224 "sudo cat /home/docker/cp-test_multinode-901224-m02_multinode-901224.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 cp multinode-901224-m02:/home/docker/cp-test.txt multinode-901224-m03:/home/docker/cp-test_multinode-901224-m02_multinode-901224-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 ssh -n multinode-901224-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 ssh -n multinode-901224-m03 "sudo cat /home/docker/cp-test_multinode-901224-m02_multinode-901224-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 cp testdata/cp-test.txt multinode-901224-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 ssh -n multinode-901224-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 cp multinode-901224-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile935339519/001/cp-test_multinode-901224-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 ssh -n multinode-901224-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 cp multinode-901224-m03:/home/docker/cp-test.txt multinode-901224:/home/docker/cp-test_multinode-901224-m03_multinode-901224.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 ssh -n multinode-901224-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 ssh -n multinode-901224 "sudo cat /home/docker/cp-test_multinode-901224-m03_multinode-901224.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 cp multinode-901224-m03:/home/docker/cp-test.txt multinode-901224-m02:/home/docker/cp-test_multinode-901224-m03_multinode-901224-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 ssh -n multinode-901224-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 ssh -n multinode-901224-m02 "sudo cat /home/docker/cp-test_multinode-901224-m03_multinode-901224-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.66s)

                                                
                                    
TestMultiNode/serial/StopNode (2.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-901224 node stop m03: (1.256643269s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-901224 status: exit status 7 (486.482907ms)

                                                
                                                
-- stdout --
	multinode-901224
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-901224-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-901224-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-901224 status --alsologtostderr: exit status 7 (483.087986ms)

                                                
                                                
-- stdout --
	multinode-901224
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-901224-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-901224-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:30:23.497828  548125 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:30:23.497954  548125 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:30:23.497965  548125 out.go:374] Setting ErrFile to fd 2...
	I1213 13:30:23.497970  548125 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:30:23.498172  548125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:30:23.498359  548125 out.go:368] Setting JSON to false
	I1213 13:30:23.498386  548125 mustload.go:66] Loading cluster: multinode-901224
	I1213 13:30:23.498513  548125 notify.go:221] Checking for updates...
	I1213 13:30:23.498742  548125 config.go:182] Loaded profile config "multinode-901224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:30:23.498754  548125 status.go:174] checking status of multinode-901224 ...
	I1213 13:30:23.499186  548125 cli_runner.go:164] Run: docker container inspect multinode-901224 --format={{.State.Status}}
	I1213 13:30:23.518155  548125 status.go:371] multinode-901224 host status = "Running" (err=<nil>)
	I1213 13:30:23.518204  548125 host.go:66] Checking if "multinode-901224" exists ...
	I1213 13:30:23.518437  548125 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-901224
	I1213 13:30:23.534668  548125 host.go:66] Checking if "multinode-901224" exists ...
	I1213 13:30:23.534989  548125 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:30:23.535039  548125 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-901224
	I1213 13:30:23.551232  548125 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33285 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/multinode-901224/id_rsa Username:docker}
	I1213 13:30:23.644181  548125 ssh_runner.go:195] Run: systemctl --version
	I1213 13:30:23.650172  548125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:30:23.662987  548125 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:30:23.716404  548125 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-13 13:30:23.707088031 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:30:23.717041  548125 kubeconfig.go:125] found "multinode-901224" server: "https://192.168.67.2:8443"
	I1213 13:30:23.717079  548125 api_server.go:166] Checking apiserver status ...
	I1213 13:30:23.717115  548125 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 13:30:23.728245  548125 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1230/cgroup
	W1213 13:30:23.736197  548125 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1230/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 13:30:23.736259  548125 ssh_runner.go:195] Run: ls
	I1213 13:30:23.739798  548125 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1213 13:30:23.743821  548125 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1213 13:30:23.743841  548125 status.go:463] multinode-901224 apiserver status = Running (err=<nil>)
	I1213 13:30:23.743850  548125 status.go:176] multinode-901224 status: &{Name:multinode-901224 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:30:23.743867  548125 status.go:174] checking status of multinode-901224-m02 ...
	I1213 13:30:23.744085  548125 cli_runner.go:164] Run: docker container inspect multinode-901224-m02 --format={{.State.Status}}
	I1213 13:30:23.762920  548125 status.go:371] multinode-901224-m02 host status = "Running" (err=<nil>)
	I1213 13:30:23.762941  548125 host.go:66] Checking if "multinode-901224-m02" exists ...
	I1213 13:30:23.763171  548125 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-901224-m02
	I1213 13:30:23.779894  548125 host.go:66] Checking if "multinode-901224-m02" exists ...
	I1213 13:30:23.780124  548125 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 13:30:23.780160  548125 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-901224-m02
	I1213 13:30:23.797855  548125 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33290 SSHKeyPath:/home/jenkins/minikube-integration/22122-390571/.minikube/machines/multinode-901224-m02/id_rsa Username:docker}
	I1213 13:30:23.890544  548125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 13:30:23.902157  548125 status.go:176] multinode-901224-m02 status: &{Name:multinode-901224-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:30:23.902189  548125 status.go:174] checking status of multinode-901224-m03 ...
	I1213 13:30:23.902491  548125 cli_runner.go:164] Run: docker container inspect multinode-901224-m03 --format={{.State.Status}}
	I1213 13:30:23.920138  548125 status.go:371] multinode-901224-m03 host status = "Stopped" (err=<nil>)
	I1213 13:30:23.920158  548125 status.go:384] host is not running, skipping remaining checks
	I1213 13:30:23.920165  548125 status.go:176] multinode-901224-m03 status: &{Name:multinode-901224-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)
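The non-zero exits above are expected: with one node stopped, "minikube status" returns exit status 7. A minimal sketch of reading that code programmatically; the binary path and profile name are copied from this log, and the meaning of exit code 7 is only what this particular run shows:

	// statuscode.go: run "minikube status" and surface its exit code.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-901224", "status")
		cmd.Stdout = os.Stdout
		err := cmd.Run()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("all nodes running (exit 0)")
		case errors.As(err, &exitErr):
			// In the run above, exit code 7 accompanied a stopped worker node.
			fmt.Println("status exit code:", exitErr.ExitCode())
		default:
			fmt.Println("could not run status:", err)
		}
	}
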

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-901224 node start m03 -v=5 --alsologtostderr: (6.441489205s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.13s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (81.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-901224
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-901224
E1213 13:30:36.824013  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-728225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-901224: (31.265916463s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-901224 --wait=true -v=5 --alsologtostderr
E1213 13:31:51.580593  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-901224 --wait=true -v=5 --alsologtostderr: (49.737030522s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-901224
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.13s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-901224 node delete m03: (4.621865519s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.21s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (28.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-901224 stop: (28.325286952s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-901224 status: exit status 7 (97.221764ms)

                                                
                                                
-- stdout --
	multinode-901224
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-901224-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-901224 status --alsologtostderr: exit status 7 (99.020964ms)

                                                
                                                
-- stdout --
	multinode-901224
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-901224-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:32:25.878888  557935 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:32:25.878981  557935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:32:25.878989  557935 out.go:374] Setting ErrFile to fd 2...
	I1213 13:32:25.878993  557935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:32:25.879187  557935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:32:25.879350  557935 out.go:368] Setting JSON to false
	I1213 13:32:25.879374  557935 mustload.go:66] Loading cluster: multinode-901224
	I1213 13:32:25.879456  557935 notify.go:221] Checking for updates...
	I1213 13:32:25.879726  557935 config.go:182] Loaded profile config "multinode-901224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:32:25.879739  557935 status.go:174] checking status of multinode-901224 ...
	I1213 13:32:25.880184  557935 cli_runner.go:164] Run: docker container inspect multinode-901224 --format={{.State.Status}}
	I1213 13:32:25.899301  557935 status.go:371] multinode-901224 host status = "Stopped" (err=<nil>)
	I1213 13:32:25.899350  557935 status.go:384] host is not running, skipping remaining checks
	I1213 13:32:25.899365  557935 status.go:176] multinode-901224 status: &{Name:multinode-901224 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 13:32:25.899435  557935 status.go:174] checking status of multinode-901224-m02 ...
	I1213 13:32:25.899830  557935 cli_runner.go:164] Run: docker container inspect multinode-901224-m02 --format={{.State.Status}}
	I1213 13:32:25.917391  557935 status.go:371] multinode-901224-m02 host status = "Stopped" (err=<nil>)
	I1213 13:32:25.917410  557935 status.go:384] host is not running, skipping remaining checks
	I1213 13:32:25.917420  557935 status.go:176] multinode-901224-m02 status: &{Name:multinode-901224-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.52s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (43.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-901224 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-901224 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (43.282468195s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-901224 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (43.88s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (25.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-901224
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-901224-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-901224-m02 --driver=docker  --container-runtime=crio: exit status 14 (76.307915ms)

                                                
                                                
-- stdout --
	* [multinode-901224-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-901224-m02' is duplicated with machine name 'multinode-901224-m02' in profile 'multinode-901224'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-901224-m03 --driver=docker  --container-runtime=crio
E1213 13:33:10.801538  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-018090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-901224-m03 --driver=docker  --container-runtime=crio: (22.546065646s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-901224
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-901224: exit status 80 (290.315881ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-901224 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-901224-m03 already exists in multinode-901224-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-901224-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-901224-m03: (2.329011479s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.30s)

                                                
                                    
TestPreload (82.27s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-045091 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-045091 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (47.461596275s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-045091 image pull gcr.io/k8s-minikube/busybox
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-045091
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-045091: (6.194577541s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-045091 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1213 13:34:33.867943  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-018090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-045091 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (25.183572182s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-045091 image list
helpers_test.go:176: Cleaning up "test-preload-045091" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-045091
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-045091: (2.368714715s)
--- PASS: TestPreload (82.27s)

                                                
                                    
TestScheduledStopUnix (97.38s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-345214 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-345214 --memory=3072 --driver=docker  --container-runtime=crio: (21.123748516s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-345214 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 13:35:22.767642  574948 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:35:22.767935  574948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:35:22.767945  574948 out.go:374] Setting ErrFile to fd 2...
	I1213 13:35:22.767949  574948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:35:22.768149  574948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:35:22.768380  574948 out.go:368] Setting JSON to false
	I1213 13:35:22.768467  574948 mustload.go:66] Loading cluster: scheduled-stop-345214
	I1213 13:35:22.768754  574948 config.go:182] Loaded profile config "scheduled-stop-345214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:35:22.768840  574948 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/scheduled-stop-345214/config.json ...
	I1213 13:35:22.769007  574948 mustload.go:66] Loading cluster: scheduled-stop-345214
	I1213 13:35:22.769098  574948 config.go:182] Loaded profile config "scheduled-stop-345214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-345214 -n scheduled-stop-345214
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-345214 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 13:35:23.144986  575098 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:35:23.145410  575098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:35:23.145419  575098 out.go:374] Setting ErrFile to fd 2...
	I1213 13:35:23.145424  575098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:35:23.145627  575098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:35:23.145906  575098 out.go:368] Setting JSON to false
	I1213 13:35:23.146092  575098 daemonize_unix.go:73] killing process 574983 as it is an old scheduled stop
	I1213 13:35:23.146204  575098 mustload.go:66] Loading cluster: scheduled-stop-345214
	I1213 13:35:23.146538  575098 config.go:182] Loaded profile config "scheduled-stop-345214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:35:23.146615  575098 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/scheduled-stop-345214/config.json ...
	I1213 13:35:23.146850  575098 mustload.go:66] Loading cluster: scheduled-stop-345214
	I1213 13:35:23.146959  575098 config.go:182] Loaded profile config "scheduled-stop-345214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1213 13:35:23.151538  394130 retry.go:31] will retry after 113.86µs: open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/scheduled-stop-345214/pid: no such file or directory
I1213 13:35:23.152743  394130 retry.go:31] will retry after 199.715µs: open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/scheduled-stop-345214/pid: no such file or directory
I1213 13:35:23.153928  394130 retry.go:31] will retry after 212.006µs: open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/scheduled-stop-345214/pid: no such file or directory
I1213 13:35:23.155059  394130 retry.go:31] will retry after 215.107µs: open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/scheduled-stop-345214/pid: no such file or directory
I1213 13:35:23.156186  394130 retry.go:31] will retry after 346.165µs: open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/scheduled-stop-345214/pid: no such file or directory
I1213 13:35:23.157332  394130 retry.go:31] will retry after 1.091888ms: open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/scheduled-stop-345214/pid: no such file or directory
I1213 13:35:23.159538  394130 retry.go:31] will retry after 1.349515ms: open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/scheduled-stop-345214/pid: no such file or directory
I1213 13:35:23.161718  394130 retry.go:31] will retry after 1.527387ms: open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/scheduled-stop-345214/pid: no such file or directory
I1213 13:35:23.163925  394130 retry.go:31] will retry after 1.701619ms: open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/scheduled-stop-345214/pid: no such file or directory
I1213 13:35:23.166141  394130 retry.go:31] will retry after 3.696109ms: open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/scheduled-stop-345214/pid: no such file or directory
I1213 13:35:23.170365  394130 retry.go:31] will retry after 8.243028ms: open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/scheduled-stop-345214/pid: no such file or directory
I1213 13:35:23.179600  394130 retry.go:31] will retry after 9.773974ms: open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/scheduled-stop-345214/pid: no such file or directory
I1213 13:35:23.189880  394130 retry.go:31] will retry after 14.657005ms: open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/scheduled-stop-345214/pid: no such file or directory
I1213 13:35:23.205142  394130 retry.go:31] will retry after 18.472774ms: open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/scheduled-stop-345214/pid: no such file or directory
I1213 13:35:23.224381  394130 retry.go:31] will retry after 26.086146ms: open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/scheduled-stop-345214/pid: no such file or directory
I1213 13:35:23.250555  394130 retry.go:31] will retry after 64.460686ms: open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/scheduled-stop-345214/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-345214 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1213 13:35:36.824541  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-728225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-345214 -n scheduled-stop-345214
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-345214
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-345214 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 13:35:49.058556  575742 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:35:49.058670  575742 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:35:49.058679  575742 out.go:374] Setting ErrFile to fd 2...
	I1213 13:35:49.058683  575742 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:35:49.058868  575742 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:35:49.059083  575742 out.go:368] Setting JSON to false
	I1213 13:35:49.059157  575742 mustload.go:66] Loading cluster: scheduled-stop-345214
	I1213 13:35:49.059483  575742 config.go:182] Loaded profile config "scheduled-stop-345214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 13:35:49.059549  575742 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/scheduled-stop-345214/config.json ...
	I1213 13:35:49.059733  575742 mustload.go:66] Loading cluster: scheduled-stop-345214
	I1213 13:35:49.059849  575742 config.go:182] Loaded profile config "scheduled-stop-345214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-345214
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-345214: exit status 7 (79.392316ms)

                                                
                                                
-- stdout --
	scheduled-stop-345214
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-345214 -n scheduled-stop-345214
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-345214 -n scheduled-stop-345214: exit status 7 (78.322396ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-345214" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-345214
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-345214: (4.740814249s)
--- PASS: TestScheduledStopUnix (97.38s)
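The schedule/cancel pair exercised above can be reproduced by hand. A small sketch wrapping the same two commands; the binary path and profile name are the ones from this run and would need adjusting for local use:

	// schedulestop.go: arm a delayed stop, then cancel it, mirroring the
	// --schedule / --cancel-scheduled sequence in the test output above.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(args ...string) {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		log.Printf("%v\n%s", args, out)
		if err != nil {
			log.Printf("command returned: %v", err)
		}
	}

	func main() {
		run("stop", "-p", "scheduled-stop-345214", "--schedule", "15s")
		run("stop", "-p", "scheduled-stop-345214", "--cancel-scheduled")
	}
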

                                                
                                    
TestInsufficientStorage (8.67s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-215505 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-215505 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.220495935s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"946256ae-c925-4aeb-a260-ff2eefb45033","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-215505] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"235a95a4-3c67-4bd0-9662-b8a10502ebbf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22122"}}
	{"specversion":"1.0","id":"5a825e8e-513c-4b46-b4e6-6bfe8fd0272e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0154d519-4e26-430d-96cf-920248248e98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig"}}
	{"specversion":"1.0","id":"8b586d8a-e89e-4bbd-9008-cd34151c5fb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube"}}
	{"specversion":"1.0","id":"4b3642b4-92a0-49d9-88c0-b3b8ba59cbc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f5875c0c-4158-456c-aa2f-5dbe843aa5e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"414a7aed-d208-4bd1-a31f-eddb04f1c0e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"7a60d758-7f7c-43ea-84a9-ba8f6e179a7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"02995621-e539-4a91-828f-8ea263642c34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3cf79422-8f31-4317-8c16-ab05d79e635f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"6533aea4-a27a-40a6-9d6f-dbc7209b892f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-215505\" primary control-plane node in \"insufficient-storage-215505\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7bab61cb-23c1-4595-a210-9ab2ad5bc011","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765275396-22083 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"0626aa36-c09f-4893-ab0d-7ca18fa13e06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"38c31de4-ed5b-427d-98ea-16c573443694","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-215505 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-215505 --output=json --layout=cluster: exit status 7 (283.431093ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-215505","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-215505","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 13:36:45.456366  578268 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-215505" does not appear in /home/jenkins/minikube-integration/22122-390571/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-215505 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-215505 --output=json --layout=cluster: exit status 7 (286.808074ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-215505","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-215505","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 13:36:45.744313  578378 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-215505" does not appear in /home/jenkins/minikube-integration/22122-390571/kubeconfig
	E1213 13:36:45.754504  578378 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/insufficient-storage-215505/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-215505" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-215505
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-215505: (1.880086685s)
--- PASS: TestInsufficientStorage (8.67s)
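The "status --output=json --layout=cluster" payload above is what this test asserts on (StatusCode 507, StatusName InsufficientStorage). A sketch of decoding just those fields; the struct mirrors only what is visible in this log and is an assumption about the shape, not minikube's own type:

	// layoutstatus.go: decode the cluster-layout status JSON shown above.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
	)

	type clusterStatus struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	func main() {
		// Abridged payload copied from the test output above.
		raw := `{"Name":"insufficient-storage-215505","StatusCode":507,"StatusName":"InsufficientStorage"}`
		var st clusterStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: %d (%s)\n", st.Name, st.StatusCode, st.StatusName)
	}
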

                                                
                                    
TestRunningBinaryUpgrade (44.03s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2460199151 start -p running-upgrade-132150 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2460199151 start -p running-upgrade-132150 --memory=3072 --vm-driver=docker  --container-runtime=crio: (19.246052778s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-132150 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-132150 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.603392228s)
helpers_test.go:176: Cleaning up "running-upgrade-132150" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-132150
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-132150: (2.490870703s)
--- PASS: TestRunningBinaryUpgrade (44.03s)

                                                
                                    
TestKubernetesUpgrade (298.64s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-899263 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-899263 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.659554073s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-899263
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-899263: (2.250121019s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-899263 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-899263 status --format={{.Host}}: exit status 7 (83.286946ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-899263 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-899263 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.314716513s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-899263 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-899263 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-899263 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (102.828765ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-899263] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-899263
	    minikube start -p kubernetes-upgrade-899263 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8992632 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-899263 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-899263 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-899263 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.612660277s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-899263" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-899263
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-899263: (2.543697225s)
--- PASS: TestKubernetesUpgrade (298.64s)
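
Note on the downgrade attempt above: minikube refuses to move an existing v1.35.0-beta.0 cluster back to v1.28.0, and the printed suggestion is the intended recovery path. As a rough, non-interactive sketch of option 1 (reusing the profile name, driver and runtime from this run):

	$ minikube delete -p kubernetes-upgrade-899263
	$ minikube start -p kubernetes-upgrade-899263 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio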

                                                
                                    
TestMissingContainerUpgrade (63.05s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1990338894 start -p missing-upgrade-533439 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1990338894 start -p missing-upgrade-533439 --memory=3072 --driver=docker  --container-runtime=crio: (22.849508048s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-533439
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-533439: (1.769400344s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-533439
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-533439 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-533439 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.319906839s)
helpers_test.go:176: Cleaning up "missing-upgrade-533439" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-533439
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-533439: (2.369709081s)
--- PASS: TestMissingContainerUpgrade (63.05s)

                                                
                                    
TestPause/serial/Start (81.24s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-484783 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1213 13:36:51.579936  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:36:59.894379  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-728225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-484783 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m21.235845749s)
--- PASS: TestPause/serial/Start (81.24s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.58s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (312.1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3107834521 start -p stopped-upgrade-627277 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3107834521 start -p stopped-upgrade-627277 --memory=3072 --vm-driver=docker  --container-runtime=crio: (38.71897004s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3107834521 -p stopped-upgrade-627277 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3107834521 -p stopped-upgrade-627277 stop: (12.394347123s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-627277 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-627277 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m20.986630424s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (312.10s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.32s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-484783 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1213 13:38:10.801952  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-018090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-484783 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.303448069s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.32s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-835209 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-835209 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (76.690436ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-835209] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
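
The exit status 14 above is the expected usage error: --no-kubernetes and --kubernetes-version are mutually exclusive. Following the hint in the stderr output, a sketch of the two valid paths (same profile name as this run):

	$ minikube config unset kubernetes-version
	$ minikube start -p NoKubernetes-835209 --no-kubernetes --driver=docker --container-runtime=crio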

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (19.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-835209 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-835209 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (19.339261067s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-835209 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (19.66s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (15.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-835209 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-835209 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (13.636277209s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-835209 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-835209 status -o json: exit status 2 (298.924715ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-835209","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-835209
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-835209: (1.967922661s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (15.90s)
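
The JSON status above is what the test asserts on: the host container is Running while the kubelet and API server are Stopped. A quick manual check of the same condition, assuming jq is available (jq is not part of the test itself):

	$ minikube -p NoKubernetes-835209 status -o json | jq -r '.Host, .Kubelet'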

                                                
                                    
TestNoKubernetes/serial/Start (6.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-835209 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-835209 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.74575945s)
--- PASS: TestNoKubernetes/serial/Start (6.75s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22122-390571/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-835209 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-835209 "sudo systemctl is-active --quiet service kubelet": exit status 1 (272.480045ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (36.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (16.411580727s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (19.905935757s)
--- PASS: TestNoKubernetes/serial/ProfileList (36.32s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-835209
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-835209: (1.49756794s)
--- PASS: TestNoKubernetes/serial/Stop (1.50s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-835209 --driver=docker  --container-runtime=crio
E1213 13:40:36.820402  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-728225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-835209 --driver=docker  --container-runtime=crio: (6.56107407s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.56s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-835209 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-835209 "sudo systemctl is-active --quiet service kubelet": exit status 1 (298.538192ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
TestNetworkPlugins/group/false (3.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-884214 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-884214 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (207.742204ms)

                                                
                                                
-- stdout --
	* [false-884214] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 13:40:46.616535  628955 out.go:360] Setting OutFile to fd 1 ...
	I1213 13:40:46.616644  628955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:40:46.616654  628955 out.go:374] Setting ErrFile to fd 2...
	I1213 13:40:46.616660  628955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 13:40:46.616904  628955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-390571/.minikube/bin
	I1213 13:40:46.617391  628955 out.go:368] Setting JSON to false
	I1213 13:40:46.618704  628955 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8595,"bootTime":1765624652,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 13:40:46.618772  628955 start.go:143] virtualization: kvm guest
	I1213 13:40:46.620719  628955 out.go:179] * [false-884214] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 13:40:46.622238  628955 notify.go:221] Checking for updates...
	I1213 13:40:46.622254  628955 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 13:40:46.623458  628955 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 13:40:46.624687  628955 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-390571/kubeconfig
	I1213 13:40:46.627362  628955 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-390571/.minikube
	I1213 13:40:46.628546  628955 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 13:40:46.629624  628955 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 13:40:46.631485  628955 config.go:182] Loaded profile config "kubernetes-upgrade-899263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 13:40:46.631604  628955 config.go:182] Loaded profile config "running-upgrade-132150": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 13:40:46.631707  628955 config.go:182] Loaded profile config "stopped-upgrade-627277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 13:40:46.631859  628955 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 13:40:46.663265  628955 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 13:40:46.663405  628955 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 13:40:46.742749  628955 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-13 13:40:46.728587795 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 13:40:46.742966  628955 docker.go:319] overlay module found
	I1213 13:40:46.746903  628955 out.go:179] * Using the docker driver based on user configuration
	I1213 13:40:46.748927  628955 start.go:309] selected driver: docker
	I1213 13:40:46.748950  628955 start.go:927] validating driver "docker" against <nil>
	I1213 13:40:46.748965  628955 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 13:40:46.752937  628955 out.go:203] 
	W1213 13:40:46.754205  628955 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1213 13:40:46.755598  628955 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-884214 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-884214

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-884214

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-884214

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-884214

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-884214

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-884214

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-884214

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-884214

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-884214

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-884214

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-884214

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-884214" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-884214" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 13:39:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-899263
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 13:38:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: stopped-upgrade-627277
contexts:
- context:
    cluster: kubernetes-upgrade-899263
    user: kubernetes-upgrade-899263
  name: kubernetes-upgrade-899263
- context:
    cluster: stopped-upgrade-627277
    user: stopped-upgrade-627277
  name: stopped-upgrade-627277
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-899263
  user:
    client-certificate: /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/kubernetes-upgrade-899263/client.crt
    client-key: /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/kubernetes-upgrade-899263/client.key
- name: stopped-upgrade-627277
  user:
    client-certificate: /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/client.crt
    client-key: /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-884214

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-884214"

                                                
                                                
----------------------- debugLogs end: false-884214 [took: 3.464214291s] --------------------------------
helpers_test.go:176: Cleaning up "false-884214" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-884214
--- PASS: TestNetworkPlugins/group/false (3.85s)
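
The MK_USAGE failure above is by design: with the crio runtime minikube requires a CNI, so --cni=false is rejected before any cluster is created. A valid combination, mirroring the kindnet run later in this report, would be along the lines of:

	$ minikube start -p kindnet-884214 --memory=3072 --cni=kindnet --driver=docker --container-runtime=crio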

                                                
                                    
TestNetworkPlugins/group/auto/Start (43.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-884214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-884214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (43.352915138s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (43.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-884214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1213 13:41:51.580466  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-884214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (43.789314107s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (43.79s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-884214 "pgrep -a kubelet"
I1213 13:41:55.470944  394130 config.go:182] Loaded profile config "auto-884214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (7.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-884214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-jwvg5" [aa5e1a1e-3eb0-4af4-b08d-1954711df103] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-jwvg5" [aa5e1a1e-3eb0-4af4-b08d-1954711df103] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 7.003721352s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (7.18s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-884214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-884214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-884214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)
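
The Localhost and HairPin checks above probe the same netcat pod two ways: once via localhost:8080 and once through its own Service name, which exercises hairpin traffic back to the pod. A manual equivalent of the hairpin probe, reusing the context and deployment from this run (the trailing echo is only for readability):

	$ kubectl --context auto-884214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080 && echo hairpin-ok"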

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-w9hsj" [59dd37b8-5af8-4522-b4df-f36235e2c583] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00487681s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-884214 "pgrep -a kubelet"
I1213 13:42:09.864510  394130 config.go:182] Loaded profile config "kindnet-884214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (8.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-884214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-7zfnt" [42bc2bf5-f509-47f3-adad-5c1de51722d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-7zfnt" [42bc2bf5-f509-47f3-adad-5c1de51722d8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004153579s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-884214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-884214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-884214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (45.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-884214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-884214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (45.251342323s)
--- PASS: TestNetworkPlugins/group/calico/Start (45.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (63.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-884214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-884214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m3.032293596s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.03s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-627277
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-627277: (1.011228339s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (64.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-884214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-884214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m4.255549232s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (64.26s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-ccmbv" [028445ad-aa8c-4847-b5e4-02fbf35f61d0] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-ccmbv" [028445ad-aa8c-4847-b5e4-02fbf35f61d0] Running
E1213 13:43:10.801609  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-018090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004166036s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-884214 "pgrep -a kubelet"
I1213 13:43:13.685674  394130 config.go:182] Loaded profile config "calico-884214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (8.2s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-884214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-ddrq9" [6372d11a-5816-4ff7-b643-0b0ff29211f2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-ddrq9" [6372d11a-5816-4ff7-b643-0b0ff29211f2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.004439516s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.20s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-884214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-884214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-884214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (45.08s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-884214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-884214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (45.084438451s)
--- PASS: TestNetworkPlugins/group/flannel/Start (45.08s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-884214 "pgrep -a kubelet"
I1213 13:43:41.865308  394130 config.go:182] Loaded profile config "custom-flannel-884214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-884214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-rf5dk" [a8eebcd5-91ae-4821-a47c-c6f57246bc58] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-rf5dk" [a8eebcd5-91ae-4821-a47c-c6f57246bc58] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004144778s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (69.53s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-884214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-884214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m9.528575861s)
--- PASS: TestNetworkPlugins/group/bridge/Start (69.53s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-884214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-884214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.08s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-884214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.08s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-884214 "pgrep -a kubelet"
I1213 13:44:05.459757  394130 config.go:182] Loaded profile config "enable-default-cni-884214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-884214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-tcrv4" [20c1249f-d8b8-4e1d-a73b-6165fcbfc845] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-tcrv4" [20c1249f-d8b8-4e1d-a73b-6165fcbfc845] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004233584s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-ldq6c" [bd2213b6-6b02-4647-9116-6e5c28b70200] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004001404s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (48.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-417583 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-417583 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (48.797194694s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (48.80s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-884214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-884214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-884214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-884214 "pgrep -a kubelet"
I1213 13:44:15.303178  394130 config.go:182] Loaded profile config "flannel-884214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.19s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-884214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-5wkpx" [5ed9749b-340b-4fb6-90b8-b18afaa3f80d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-5wkpx" [5ed9749b-340b-4fb6-90b8-b18afaa3f80d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004642049s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-884214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-884214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-884214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (45.81s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-992258 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-992258 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (45.809542965s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (45.81s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (40.97s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-973953 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-973953 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (40.966806595s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.97s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-884214 "pgrep -a kubelet"
I1213 13:44:52.866216  394130 config.go:182] Loaded profile config "bridge-884214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-884214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-m28j8" [0067b33b-2523-44da-8779-ee2536bc3c31] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 13:44:54.645078  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/addons-802674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-m28j8" [0067b33b-2523-44da-8779-ee2536bc3c31] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004612514s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-417583 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [62a21653-fb04-4160-81ac-a3647bfa3884] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [62a21653-fb04-4160-81ac-a3647bfa3884] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004573935s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-417583 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-884214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-884214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-884214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (16.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-417583 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-417583 --alsologtostderr -v=3: (16.258665573s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.58s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-992258 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [80cbe112-02fa-49c6-8738-accc0daffc9c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [80cbe112-02fa-49c6-8738-accc0daffc9c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.044836517s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-992258 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.58s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-038239 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-038239 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (39.457447484s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.46s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-417583 -n old-k8s-version-417583
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-417583 -n old-k8s-version-417583: exit status 7 (98.022397ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-417583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (50.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-417583 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-417583 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.77325263s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-417583 -n old-k8s-version-417583
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (16.33s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-992258 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-992258 --alsologtostderr -v=3: (16.32777135s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (7.26s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-973953 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [c0f02bd5-9f45-405f-81f1-b1df3e55d90d] Pending
helpers_test.go:353: "busybox" [c0f02bd5-9f45-405f-81f1-b1df3e55d90d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [c0f02bd5-9f45-405f-81f1-b1df3e55d90d] Running
E1213 13:45:36.820390  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/functional-728225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.004091497s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-973953 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (18.64s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-973953 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-973953 --alsologtostderr -v=3: (18.640360244s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.64s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-992258 -n no-preload-992258
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-992258 -n no-preload-992258: exit status 7 (81.839787ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-992258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (52.61s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-992258 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-992258 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (52.167677521s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-992258 -n no-preload-992258
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (52.61s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-973953 -n embed-certs-973953
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-973953 -n embed-certs-973953: exit status 7 (86.123158ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-973953 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (45.44s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-973953 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-973953 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (45.100087491s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-973953 -n embed-certs-973953
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (45.44s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-038239 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [c7c5ad6c-b8c5-45ca-a64a-a6a035816784] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [c7c5ad6c-b8c5-45ca-a64a-a6a035816784] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004155933s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-038239 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (16.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-038239 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-038239 --alsologtostderr -v=3: (16.75577174s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.76s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-v5gzb" [12bf6f7a-a070-4d1c-a202-1b73285ad918] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003414045s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-v5gzb" [12bf6f7a-a070-4d1c-a202-1b73285ad918] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003958284s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-417583 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-417583 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-038239 -n default-k8s-diff-port-038239
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-038239 -n default-k8s-diff-port-038239: exit status 7 (93.564547ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-038239 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (44.74s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-038239 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-038239 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (44.399593236s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-038239 -n default-k8s-diff-port-038239
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (44.74s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (26.31s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-362964 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-362964 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (26.305676789s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-dkjpg" [5eda8208-36f8-430e-aa77-eb10d4bc5769] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004206235s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-dkjpg" [5eda8208-36f8-430e-aa77-eb10d4bc5769] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00342952s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-992258 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-9zb5p" [4c8daaac-7546-4f7d-a09c-a667c2a384b7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003773198s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-992258 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-9zb5p" [4c8daaac-7546-4f7d-a09c-a667c2a384b7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003981999s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-973953 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-973953 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.97s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-362964 --alsologtostderr -v=3
E1213 13:47:08.702664  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/kindnet-884214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 13:47:13.825000  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/kindnet-884214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-362964 --alsologtostderr -v=3: (7.966560378s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.97s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-zlkps" [1875c354-6bf3-4786-b35a-cac99170722a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004336504s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-362964 -n newest-cni-362964
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-362964 -n newest-cni-362964: exit status 7 (80.12711ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-362964 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (10.41s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-362964 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1213 13:47:16.133013  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/auto-884214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-362964 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (10.0911417s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-362964 -n newest-cni-362964
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.41s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-zlkps" [1875c354-6bf3-4786-b35a-cac99170722a] Running
E1213 13:47:24.066749  394130 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/kindnet-884214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004095491s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-038239 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-362964 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-038239 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    

Test skip (34/415)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
147 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
148 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
149 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
380 TestNetworkPlugins/group/kubenet 3.53
388 TestNetworkPlugins/group/cilium 4
394 TestStartStop/group/disable-driver-mounts 0.17
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-884214 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-884214

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-884214

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-884214

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-884214

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-884214

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-884214

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-884214

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-884214

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-884214

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-884214

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-884214

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-884214" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-884214" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 13:39:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-899263
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 13:38:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: stopped-upgrade-627277
contexts:
- context:
    cluster: kubernetes-upgrade-899263
    user: kubernetes-upgrade-899263
  name: kubernetes-upgrade-899263
- context:
    cluster: stopped-upgrade-627277
    user: stopped-upgrade-627277
  name: stopped-upgrade-627277
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-899263
  user:
    client-certificate: /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/kubernetes-upgrade-899263/client.crt
    client-key: /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/kubernetes-upgrade-899263/client.key
- name: stopped-upgrade-627277
  user:
    client-certificate: /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/client.crt
    client-key: /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-884214

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-884214"

                                                
                                                
----------------------- debugLogs end: kubenet-884214 [took: 3.354857481s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-884214" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-884214
--- SKIP: TestNetworkPlugins/group/kubenet (3.53s)

                                                
                                    
TestNetworkPlugins/group/cilium (4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-884214 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-884214

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-884214

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-884214

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-884214

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-884214

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-884214

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-884214

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-884214

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-884214

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-884214

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-884214

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-884214" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-884214

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-884214

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-884214

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-884214

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-884214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-884214" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 13:39:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-899263
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 13:40:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: running-upgrade-132150
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22122-390571/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 13:38:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: stopped-upgrade-627277
contexts:
- context:
    cluster: kubernetes-upgrade-899263
    user: kubernetes-upgrade-899263
  name: kubernetes-upgrade-899263
- context:
    cluster: running-upgrade-132150
    user: running-upgrade-132150
  name: running-upgrade-132150
- context:
    cluster: stopped-upgrade-627277
    user: stopped-upgrade-627277
  name: stopped-upgrade-627277
current-context: running-upgrade-132150
kind: Config
users:
- name: kubernetes-upgrade-899263
  user:
    client-certificate: /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/kubernetes-upgrade-899263/client.crt
    client-key: /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/kubernetes-upgrade-899263/client.key
- name: running-upgrade-132150
  user:
    client-certificate: /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/running-upgrade-132150/client.crt
    client-key: /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/running-upgrade-132150/client.key
- name: stopped-upgrade-627277
  user:
    client-certificate: /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/client.crt
    client-key: /home/jenkins/minikube-integration/22122-390571/.minikube/profiles/stopped-upgrade-627277/client.key
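
The kubeconfig above has entries only for the three upgrade-test clusters; there is no context named cilium-884214, which is why every kubectl call in this capture fails with a missing-context error. A minimal sketch of how to confirm that against the same kubeconfig, using only the context names shown in the dump:

# List the contexts kubectl actually knows about
kubectl config get-contexts -o name

# Show the context kubectl would use by default
kubectl config current-context

# Point kubectl at a context that does exist
kubectl config use-context running-upgrade-132150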

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-884214

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-884214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884214"

                                                
                                                
----------------------- debugLogs end: cilium-884214 [took: 3.838335913s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-884214" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-884214
--- SKIP: TestNetworkPlugins/group/cilium (4.00s)
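
The repeated "Profile ... not found" lines in the capture above come from the post-mortem collector querying a profile that was skipped and never created. A quick sketch of how to list the profiles that actually exist before reading such a capture, using the same binary invoked elsewhere in this report:

# Show every profile minikube knows about (valid and invalid)
out/minikube-linux-amd64 profile list

# The collector's own suggestion for creating the missing profile, if it were wanted
out/minikube-linux-amd64 start -p cilium-884214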

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-031848" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-031848
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    